--- Log opened Thu Oct 12 00:00:53 2023
00:39 -!- Jenda [~jenda@coralmyn.hrach.eu] has quit [Ping timeout: 255 seconds]
00:40 -!- Jenda [~jenda@coralmyn.hrach.eu] has joined #hplusroadmap
00:40 < hprmbridge> Eli> You can get around chatgpt being a karen by asking it questions in zulu or gaelic https://cdn.discordapp.com/attachments/1064664282450628710/1161931443807719505/image.png?ex=653a17fb&is=6527a2fb&hm=6a053c4e4fbc76f9bda2ff1264ad59167969ae2b7296c91b11c75b9d0df60257&
00:44 < muurkha> nmz787: can't you just install an app from F-Droid?
00:57 < fenn> no, they removed the system permission necessary to record calls from the entire OS
00:58 < fenn> so apparently jake cannell made a little startup called vast.ai :D
01:02 < muurkha> oh, so you have to install LineageOS? Or use a Bluetooth gizmo, I guess
01:06 -!- Llamamoe [~Llamamoe@46.204.77.176] has joined #hplusroadmap
01:21 < fenn> "Perhaps, like me, you wish to become posthuman: to transcend mortality and biology, to become a substrate independent mind, to wear new bodies like clothes, to grow more intelligent, wise, wealthy, and connected, to explore the multiverse, perhaps eventually to split, merge, and change - to vasten."
02:01 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has joined #hplusroadmap
04:09 -!- yashgaroth [~ffffffff@2605:a601:a0e0:8f00:8545:cee6:7e30:8e56] has joined #hplusroadmap
04:52 < sphertext_> this is in fact exceedingly based https://metric-time.com/
04:55 < sphertext_> fenn the disembodied mind doesn't make sense. the self is determined not only by ideas, not only even by the brain, not even by all your other cells, like immune system and so on; but rather, even your gut biome affects mood. you're not the same person you were yesterday morning, so how can you stay the same person if you suddenly altered your whole physiology?
05:31 < hprmbridge> Eli> the original programmers made a mistake and now we can't deprecate due to institutional inertia...
07:02 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has quit [Remote host closed the connection]
07:06 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has joined #hplusroadmap
08:06 < hprmbridge> soul_syrup> https://www.researchgate.net/publication/374615955_Artificial_Intelligence_for_EEG_Prediction_Applied_Chaos_Theory#fullTextFileContent
08:15 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has joined #hplusroadmap
08:24 < hprmbridge> nmz787> sphertext, here's a simple example of stupidly simple code updates from GPT.... I was sitting at my kid's gymnastics class working on debugging CAD geometry, and I needed to cancel out some conversion from real-world units to CAD database units... and I wanted it to handle a number as well as lists and tuples of them, so I wrote this function which immediately failed because my testcase was
08:24 < hprmbridge> nmz787> actually a list of tuples (my eyes didn't pick up on that at first), and I was just too distracted (loud noises, uncomfortable chairs, other people that I was paranoid could be eavesdropping)... but I knew generally how a solution could be approximated...
08:24 < hprmbridge> nmz787> so I asked GPT this:
08:24 < hprmbridge> nmz787>
08:24 < hprmbridge> nmz787> enhance this function to use recursion if an element in an iterable thing is also an iterable:
08:24 < hprmbridge> nmz787> def cancel_precision(thing):
08:24 < hprmbridge> nmz787>     if isinstance(thing, tuple):
08:24 < hprmbridge> nmz787>         return tuple(i / self.precision for i in thing)
08:24 < hprmbridge> nmz787>     elif isinstance(thing, list):
08:24 < hprmbridge> nmz787>         return [i / self.precision for i in thing]
08:24 < hprmbridge> nmz787>     return thing / self.precision
08:24 < hprmbridge> nmz787> and it immediately spit out:
08:24 < hprmbridge> nmz787>
08:24 < hprmbridge> nmz787> def cancel_precision(thing):
08:24 < hprmbridge> nmz787>     if isinstance(thing, tuple):
08:24 < hprmbridge> nmz787>         return tuple(cancel_precision(i) for i in thing)
08:24 < hprmbridge> nmz787>     elif isinstance(thing, list):
08:24 < hprmbridge> nmz787>         return [cancel_precision(i) for i in thing]
08:24 < hprmbridge> nmz787>     return thing / self.precision
08:24 < hprmbridge> nmz787> which is like, "oh yeah, that's right"
08:25 < hprmbridge> nmz787> in terms of comprehensibility
09:08 -!- rabbit_logic [~rabbit.lo@66.51.112.246] has joined #hplusroadmap
09:19 -!- s0ph1a [sid246387@helmsley.irccloud.com] has quit [Read error: No route to host]
09:20 -!- s0ph1a [sid246387@id-246387.helmsley.irccloud.com] has joined #hplusroadmap
09:20 -!- drmeister [sid45147@ilkley.irccloud.com] has quit [Read error: Connection reset by peer]
09:20 -!- dartmouthed [~blackunsp@li761-35.members.linode.com] has quit [Quit: ZNC 1.8.2 - https://znc.in]
09:20 -!- drmeister [sid45147@id-45147.ilkley.irccloud.com] has joined #hplusroadmap
09:22 -!- pasky [~pasky@nikam.ms.mff.cuni.cz] has quit [Ping timeout: 246 seconds]
09:23 -!- pasky [~pasky@nikam.ms.mff.cuni.cz] has joined #hplusroadmap
09:28 -!- dartmouthed [~blackunsp@li761-35.members.linode.com] has joined #hplusroadmap
09:32 -!- Llamamoe [~Llamamoe@46.204.77.176] has quit [Quit: Leaving.]
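[editor's note: the recursive version quoted above references self.precision, so it presumably lives on a class the log never shows. A minimal self-contained sketch of the idea follows; the class name Geometry and the precision value are assumptions, not from the log.]

```python
# Sketch of the recursive unit-conversion helper discussed above.
# The enclosing class and its precision value are assumed for illustration;
# the log only shows the method body, which references self.precision.

class Geometry:
    def __init__(self, precision):
        # CAD database units per real-world unit (value is a placeholder)
        self.precision = precision

    def cancel_precision(self, thing):
        """Divide a number, or every number in arbitrarily nested
        lists/tuples, by self.precision, preserving container types."""
        if isinstance(thing, tuple):
            return tuple(self.cancel_precision(i) for i in thing)
        elif isinstance(thing, list):
            return [self.cancel_precision(i) for i in thing]
        return thing / self.precision

g = Geometry(precision=1000)
# the testcase shape that broke the original: a list of tuples
print(g.cancel_precision([(2000, 4000), (6000,)]))  # [(2.0, 4.0), (6.0,)]
```

The recursion handles nesting of any depth, which is what the original flat version (one comprehension per container type, no recursive call) could not do.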
11:15 -!- test_ [flooded@gateway/vpn/protonvpn/flood/x-43489060] has joined #hplusroadmap
11:19 -!- test__ [flooded@gateway/vpn/protonvpn/flood/x-43489060] has quit [Ping timeout: 260 seconds]
12:32 < kanzure> "Tired of shortages, OpenAI considers making its own AI chips" how they gonna get scheduled any more than nvidia on the fabs?
12:33 < muurkha> they might use less demanding processes
12:43 < hprmbridge> nmz787> Intel foundry
12:43 < hprmbridge> nmz787> Maybe
12:52 < hprmbridge> nmz787> Muurkha I doubt it, they want as much processing performance as they can get, and seem to have the customer demand to support paying for it upfront (as opposed to later in datacenter real estate footprint and power bills)
12:53 < muurkha> they might be willing to spend more power or datacenter real estate than nvidia's target user. or, as you say, less
13:40 -!- sphertext_ [~sphertext@user/sphertext] has quit [Ping timeout: 245 seconds]
13:42 -!- sphertext_ [~sphertext@user/sphertext] has joined #hplusroadmap
13:45 < fenn> or maybe they don't need all that other crap and just want 4-bit int hardware interleaved with fast ram
13:47 < fenn> you know, the "neuromorphic" thing we've been hearing about since the 1980s
13:54 < fenn> or maybe slow ram is good enough
13:55 < fenn> by sprinkling compute around instead of centralizing it, you greatly increase the number of parallel data channels (total ram bandwidth)
13:56 < fenn> i'm not fully bought into jake's "the brain is optimal" argument, but certainly there are some huge mismatches between the algorithm we're converging on and the hardware we're using
14:13 -!- justanot1 [~justanoth@gateway/tor-sasl/justanotheruser] has quit [Ping timeout: 252 seconds]
14:17 -!- justanotheruser [~justanoth@gateway/tor-sasl/justanotheruser] has joined #hplusroadmap
14:29 < hprmbridge> nmz787> https://linustechtips.com/topic/1502638-intel-patent-details-new-adm-cache-name-totally-different-from-competing-company/
14:42 < fenn> a slightly larger cache is not going to make any difference
14:45 < fenn> is it > 1GB? the hype around ADM is rather vague
14:45 < hprmbridge> nmz787> That's on-chip ram
14:46 < hprmbridge> nmz787> It's however many bits a designer wants above their logic transistors
14:46 < hprmbridge> nmz787> Just slower than sram
14:49 -!- ike8 [12fdf2ee08@irc.cheogram.com] has quit [Ping timeout: 258 seconds]
14:59 -!- ike8 [12fdf2ee08@irc.cheogram.com] has joined #hplusroadmap
16:04 < sphertext> does anybody have a source for peptides which is trusted for purity and identity?
16:15 -!- TMM_ [hp@amanda.tmm.cx] has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
16:15 -!- TMM_ [hp@amanda.tmm.cx] has joined #hplusroadmap
16:41 < hprmbridge> nmz787> My endoplasmic reticulum has been pretty decent
16:42 < sphertext> LOL
16:43 * sphertext cuts nmz787 open to extract some peptides of interest
16:43 -!- catalase [catalase@freebnc.bnc4you.xyz] has joined #hplusroadmap
16:56 < fenn> what happened to mark sims? did he go off to live with aliens or what?
16:57 < fenn> subtitle of his book was "a cautionary tale"
16:58 < fenn> of course it's impossible to get the book
17:30 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has quit [Ping timeout: 264 seconds]
18:08 < ike8> sphertext: is the only reason Selegiline is good for longevity that it prevents dopamine from catabolizing into DOPAL and makes one more active through their life? Or are there more reasons?
18:21 -!- o-90 [~o-90@gateway/tor-sasl/o-90] has joined #hplusroadmap
18:22 < kanzure> o-90: hello
18:27 -!- o-90 [~o-90@gateway/tor-sasl/o-90] has quit [Quit: Leaving]
19:18 < hprmbridge> nmz787> Kanzure what was the library manager database thing you liked, was it zotero? Something else?
19:33 < kanzure> zotero is good
19:33 < kanzure> yes
19:38 -!- yashgaroth [~ffffffff@2605:a601:a0e0:8f00:8545:cee6:7e30:8e56] has quit [Ping timeout: 240 seconds]
20:13 < docl> huh, notebook mode works way better than chat mode for llms. every time it starts to nosedive into lame wrapup talk (in conclusion, yadda yadda) you can just edit and replace it with something else (which is why, provocative thing you really wanted to get into). conversational flow control
20:35 -!- Malvolio [~Malvolio@idlerpg/player/Malvolio] has quit [Ping timeout: 264 seconds]
20:42 < docl> I wonder if anyone has written an interface that zaps pleasantries, conclusions, listicles, etc. and replaces them with stimulator phrases automatically
21:24 < fenn> all that stuff comes from the training data. find a better fine tune that doesn't blather
21:25 < fenn> steering vectors might work, but the patch is months old now
21:26 < fenn> at least with gpt-4 you can provide a system prompt to tell it to be concise and direct
21:27 < fenn> some open source fine tunes have system prompt capability, or at least that's the intent
21:30 < fenn> also you can have a fake chat history where it answers questions with the desired style, sometimes that works
21:45 < docl> any recommendation for a 7B model?
21:46 < fenn> not really. this is ok https://huggingface.co/teknium/CollectiveCognition-v1.1-Mistral-7B
21:47 < fenn> it doesn't have a concept of brevity though
21:48 < fenn> i don't know why we don't have a profusion of LoRAs for different goals/tasks like happened with stable diffusion. setting response length ought to be easy to tune, and there are models that have a response length parameter (e.g. LimaRP)
21:49 < fenn> so a low-rank LoRA ought to be able to set the response length, either directly or via a parameter in the prompt
21:52 < fenn> basically nothing has example output, it's bizarre
21:52 < fenn> how hard is it to include a text file in the repo
22:02 < docl> I'll give it a try, thanks
22:16 -!- Malvolio [~Malvolio@idlerpg/player/Malvolio] has joined #hplusroadmap
23:27 -!- TMM_ [hp@amanda.tmm.cx] has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
23:27 -!- TMM_ [hp@amanda.tmm.cx] has joined #hplusroadmap
--- Log closed Fri Oct 13 00:00:54 2023
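[editor's note: fenn's two steering tricks from this exchange, a "concise and direct" system prompt plus a fake chat history demonstrating the desired style, can be sketched as a plain message list. The message-dict shape follows the common chat-completions convention; the function name and example Q/A content are invented for illustration, and no specific API client is assumed.]

```python
# Sketch of the prompt-steering tricks discussed above: a terse system
# prompt plus a fabricated Q/A exchange in the desired style, so the model
# imitates that brevity when answering the real question (placed last).
# Function name and example content are illustrative, not from any API.

def build_steering_messages(question):
    return [
        {"role": "system",
         "content": "Be concise and direct. No pleasantries, no summaries."},
        # Fake chat history demonstrating the desired terse style:
        {"role": "user",
         "content": "What limits token generation speed in LLM inference?"},
        {"role": "assistant",
         "content": "RAM bandwidth: every token reads all the weights."},
        # The real question goes last:
        {"role": "user", "content": question},
    ]

msgs = build_steering_messages("Why quantize weights to 4-bit ints?")
print(len(msgs))  # 4
```

The same list would be passed as the `messages` argument of whatever chat API or local inference wrapper is in use; models whose fine-tune honors system prompts respond to the first trick, and the fake-history trick works even on models that ignore system prompts.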