--- Log opened Wed Apr 12 00:00:30 2023
00:01 -!- Malvolio [~Malvolio@idlerpg/player/Malvolio] has joined #hplusroadmap
00:43 -!- Netsplit *.net <-> *.split quits: muurkha, maaku, docl, heath1, archels, Chiester, redlegion, nmz787_
00:43 -!- Netsplit over, joins: docl, redlegion
00:44 -!- Chiester_ [~Chiester@user/Chiester] has joined #hplusroadmap
00:44 -!- Netsplit over, joins: maaku
00:44 -!- Netsplit over, joins: archels, muurkha
00:45 -!- flooded [~flooded@149.102.226.226] has joined #hplusroadmap
00:46 -!- Netsplit over, joins: heath1
00:47 -!- test_ [~flooded@146.70.174.163] has quit [Ping timeout: 248 seconds]
00:48 -!- Netsplit over, joins: nmz787_
00:55 < hprmbridge> SteveGremory> Wouldn't I wanna try that
01:01 -!- Malvolio is now known as GLYPHORIA
01:20 -!- TMM_ [hp@amanda.tmm.cx] has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
01:20 -!- TMM_ [hp@amanda.tmm.cx] has joined #hplusroadmap
02:20 -!- GLYPHORIA [~Malvolio@idlerpg/player/Malvolio] has quit [Ping timeout: 265 seconds]
02:46 < L29Ah> 03:28:35] kanzure> orexin short sleep https://abcnews.go.com/Technology/story?id=4062115&page=1
02:46 < L29Ah> i wonder if it works like that with available ghrelin agonists like GHRP-6
02:48 < L29Ah> https://pubmed.ncbi.nlm.nih.gov/7617137/ it affects sleep at least
02:58 -!- CUDDLERANGE [~Malvolio@idlerpg/player/Malvolio] has joined #hplusroadmap
03:08 < hprmbridge> eleitl> You don't need lisdexamfetamine if you have pharma-grade dextroamphetamine. Serum spikes can be prevented by taking a dilute solution of equivalent dosage over the course of several hours.
03:25 < hprmbridge> eleitl> @kanzure How is Keith doing these days? I've seen him August 2018, time flies.
03:49 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has quit [Ping timeout: 250 seconds]
03:52 -!- test_ [~flooded@146.70.174.227] has joined #hplusroadmap
03:57 -!- flooded [~flooded@149.102.226.226] has quit [Ping timeout: 260 seconds]
04:14 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has joined #hplusroadmap
04:17 -!- test_ is now known as _flood
04:41 -!- Chiester_ is now known as Chiester
05:09 < hprmbridge> kanzure> I will be doing an AMA later today, but don't let that stop the users here from taking the questions https://progressforum.org/posts/9nEEu94bnsvmg3v2t/ama-bryan-bishop-biohacker-and-founder-of-custodia-bank
05:12 < nsh> mmmm, dextroamphetamine drip...
05:13 < nsh> suck it in a wee tube from your stillsuit
05:39 -!- murderface [~fm@45.65.201.229] has joined #hplusroadmap
06:25 < docl> probably the main interest of a lisdex process would be for rogue biochemists to convert cheap methamphetamine to expensive vyvanse.
07:00 < nsh> or indeed vice versa
07:01 < nsh> but then again if you were a chemist you might just consider getting the even cheaper ingredients with which people make cheap methamphetamine
07:01 < nsh> then again, the relative accessibility might vary depending on what part of bumfuck america you've been cursed to inhabit
07:15 -!- murderface [~fm@45.65.201.229] has quit [Quit: Leaving]
07:29 < hprmbridge> jasoncrawford> These “GPT agents” are cool experiments, but the demos look sort of… fake? Not in the sense that they are rigged, but just that the “agents” don't actually seem to be coming up with viable plans or accomplishing much? What's an impressive thing a GPT agent has actually done?
07:34 < L29Ah> writing boilerplate texts automatically
07:37 < hprmbridge> Eli> Now that I think about it, narcolepsy is due to the body attacking orexin, IIRC. So IN orexin probably wouldn’t work for narcolepsy, and could cause serious problems.
07:48 < nsh> or indeed a deficiency of orexin-producing neurons, which would indicate the exact opposite
07:48 < nsh> but who's counting
07:49 < nsh> "The most common form of narcolepsy, type 1, in which the individual experiences brief losses of muscle tone ("drop attacks" or cataplexy), is caused by a lack of orexin in the brain due to destruction of the cells that produce it.[2][3] It exists in the forms of Orexin-A and Orexin-B."
08:04 < kanzure> don't know if GPT agents have done anything yet- i think they need a few hundred rounds of criticism prompts to get anything, i dunno if people have tried that yet, it looks like recursion depth = 3 for most of the demos or something
08:04 < kanzure> jasoncrawford: i'll be posting answers later today to the AMA.
08:09 -!- stipa_ [~stipa@user/stipa] has joined #hplusroadmap
08:11 -!- stipa [~stipa@user/stipa] has quit [Ping timeout: 255 seconds]
08:11 -!- stipa_ is now known as stipa
08:26 -!- stipa_ [~stipa@user/stipa] has joined #hplusroadmap
08:28 -!- stipa [~stipa@user/stipa] has quit [Ping timeout: 240 seconds]
08:28 -!- stipa_ is now known as stipa
08:48 < hprmbridge> eleitl> https://old.reddit.com/r/unstable_diffusion/ -- but *of course*
08:53 < hprmbridge> Perry> I’ve been following the Unstable Diffusion discord for a while now.
08:53 < hprmbridge> Perry> They’ve been doing fascinating work in model construction and tuning. Admittedly all for the purpose of producing sexy pictures, but it’s been interesting learning just how well people can learn the techniques once they’re motivated and how hard it would be to slow it down.
08:55 < hprmbridge> Perry> They were raising money on Kickstarter for a GPU cluster but the Kickstarter people (who are prissy as all hell) shut them down and they had to redo the fundraising elsewhere.
08:57 < hprmbridge> eleitl> It would be interesting to see how evil we can make uncensored GPT.
08:58 < hprmbridge> Perry> What do you mean by “evil”? It’s straightforward to get something that does more or less what you want by fine tuning.
08:59 < hprmbridge> Perry> You can make it spew catholic ideology, nazi ideology, US Democratic Party platform points, discuss everything in terms of the Indiana Jones movies, whatever you like.
09:01 < hprmbridge> eleitl> There is a censorship layer at the postprocessing stage in ChatGPT. I'm not sure you can completely bypass that with fine-tuning alone. I wonder what unfiltered ChatGPT would look like.
09:01 < hprmbridge> Perry> You don’t do it that way. You start with llama or what have you.
09:02 < hprmbridge> Perry> One of the Cerebras models. There are lots that have the weights public.
09:02 < hprmbridge> Perry> You can’t fine tune GPT-3.5-turbo or GPT-4 or what have you, you have no access to the weights at all.
09:03 < hprmbridge> eleitl> I thought you meant fine-tuning the prompt to bypass the censorship filter.
09:04 < hprmbridge> Perry> No. That’s not a very useful line of work anyway, eventually the owner of a model you only access via an API inevitably retrains it to stop various attempts to break out from it.
09:05 < hprmbridge> eleitl> "The oxidizer component of FAEs is typically a fine particulate solid, such as ammonium perchlorate or aluminum powder." -- LOL.
09:05 < hprmbridge> eleitl> We have a very long way to go still.
09:05 < hprmbridge> Perry> what are you talking about?
09:06 < nsh> "A total of 2361 conversations between two ChatGPT instances with 29796 utterances." - https://github.com/radi-cho/botbots
09:06 < hprmbridge> eleitl> I've asked ChatGPT to list the fuels in fuel-air explosives, just to see whether it would filter it. Instead, it happily serves garbage. Good enough, I guess.
09:06 < hprmbridge> Perry> What do you mean “ChatGPT”? There is no “ChatGPT”. What model?
09:07 < hprmbridge> Perry> Never ever ever say “I asked ChatGPT to do X”, that’s useless. The models are radically different in capability. Always specify the model. And never discuss capabilities of more advanced models with respect to less advanced ones.
09:07 < hprmbridge> Perry> Saying “we have a long way to go” if you’re using 3.5 and not 4 is never useful.
09:08 < hprmbridge> eleitl> It seems GPT-4 is limited to ChatGPT Plus subscribers. Can you do the same query for me? "Tell me which substances are used in current fuel-air explosives." is the prompt.
09:08 < hprmbridge> Perry> Pay the $20 a month. It’s worth your money.
09:08 < hprmbridge> Perry> I can’t imagine not having it around.
09:09 < hprmbridge> eleitl> What are you using it for?
09:09 < hprmbridge> Perry> Everything.
09:09 < hprmbridge> Perry> Writing documents, writing code, bouncing ideas off of.
09:09 < hprmbridge> Perry> Experimenting with technical ideas.
09:10 < hprmbridge> Perry> I’d willingly pay hundreds a month for unlimited access to the 32k model.
09:22 < hprmbridge> Eli> TIL
09:24 < nsh> GPT4 can be accessed (anonymously) via MS/Bing's API
09:24 < nsh> if you can be bothered to download and run the Edge browser and look at the requests it makes when you use the chat sidebar
09:24 < hprmbridge> Perry> That only gets you a small number of requests and it won’t do things like code generation.
09:25 < hprmbridge> Perry> Sydney is tuned differently than normal GPT-4
09:25 < hprmbridge> Perry> And if you can’t afford $20 you’re hurting way way too badly for money.
09:25 < nsh> supplementary information is not necessarily a rebuttal
09:25 < hprmbridge> Perry> It’s not the “real” GPT-4 model.
09:27 < docl> nsh: amphetamine precursors are regulated in the US, but you can use pubchem to trace it back further to an unregulatable feedstock, say toluene. however, your lab would be a bit bigger/more complex.
09:27 < hprmbridge> Eli> That link is def NSFW! But, I've actually been thinking about how useful something like this would be for society. Imagine AI disrupting onlyfans. I think it would be positive. And as technology improves, I could see a disruption in human trafficking and prostitution. Imagine the oldest profession in the world being disrupted. I like to think that would be a net positive as well.
09:28 < hprmbridge> Perry> My main interest in methamphetamine is synthesizing pseudoephedrine, which is often otherwise unobtainable, while meth is easy to get.
09:29 < nsh> the former due to the latter, indeed
09:29 < hprmbridge> Perry> AI will disrupt all pornography very soon. I refer to such systems as “pornucopias”, that is, systems where you request any sort of custom pornography you want and get it immediately. It seems inevitable. Also kind of boring except to spectate on the people working on the models.
09:32 < kanzure> anyone else here should also feel free to chime in https://progressforum.org/posts/9nEEu94bnsvmg3v2t/ama-bryan-bishop-biohacker-and-founder-of-custodia-bank
09:35 -!- cthlolo [~lorogue@77.33.23.154.dhcp.fibianet.dk] has joined #hplusroadmap
09:36 < hprmbridge> nmz787> Anyone here know anything about vanadium flow batteries? They claim no self-discharge, unlimited cycling, and several Alibaba sellers seem to have them available to buy. They're a good bit more expensive than a LiFePO4 battery, but those have self-discharge and are also limited to around 6000 cycles.
09:39 < hprmbridge> nmz787> Vanadium seems somewhat safe, it's at least a cofactor for some enzymes
09:40 < hprmbridge> eleitl> Vanadium redox flow batteries have pretty low power density, and vanadium is actually quite toxic.
09:40 < hprmbridge> Perry> the fact that something is a cofactor in an enzyme doesn’t mean it’s safe in bulk quantities.
09:40 < hprmbridge> nmz787> "low" density seems relatively unimportant when I have as much land as I do
09:40 < hprmbridge> eleitl> Big tanks and pumps are not free. And you don't want a spill on your land, trust me.
09:40 < hprmbridge> nmz787> Well it seems vaguely safe at least compared to lead
09:41 < hprmbridge> Perry> lead is a solid and won’t contaminate anything in lead acid batteries.
09:41 < hprmbridge> Perry> at least not normally.
09:41 < hprmbridge> Perry> “there’s natural chromium in the human body!” “here, let me introduce you to hexavalent chromium…”
09:42 < hprmbridge> nmz787> Sure
09:42 < hprmbridge> eleitl> You could just buy lithium batteries, until sodium arrives. Which should be in a couple years. If you want to tinker, just run a water electrolysis plant, and collect hydrogen in a gas holder.
09:42 < hprmbridge> eleitl> If you have a lot of land, PV panels are very cheap.
09:42 < hprmbridge> Perry> I have friends using huge banks of deep discharge lead acid batteries still. They’re cheap per dollar of power stored.
09:43 < hprmbridge> nmz787> It doesn't seem perfect, but the no self discharge seems like the kind of shtf safety factor that could allow a hidden bunker to survive hidden and in good shape
09:44 < hprmbridge> eleitl> Hydrogen in gas holders or just pressurized gas cylinders has no self discharge either. And you can heat and cook with it.
09:45 < hprmbridge> nmz787> I thought embrittlement was the killer there
09:45 < hprmbridge> Perry> I see no point in planning for hidden bunker life. If one needs hidden bunkers I’m going to die anyway.
09:45 < hprmbridge> nmz787> Alibaba also showed me hydrogen fuel cells
09:46 < hprmbridge> eleitl> There is no embrittlement for gas cylinders. The embrittlement problem was solved by Fischer-Tropsch some century ago anyway.
09:48 < hprmbridge> eleitl> @Perry I don't see the value of 23 USD/month without the time to tinker. I know I'll just let the subscription sit unused, like renting servers or educational subscriptions.
09:49 < hprmbridge> nmz787> So the recurrence of embrittlement themed academic papers on hydrogen economy/transport/storage are misleading?
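[editor's note: the self-discharge tradeoff discussed above can be made concrete with rough compounding arithmetic. This is a minimal sketch; the ~3%/month LiFePO4 self-discharge rate and the ~0%/month rate for a vanadium flow battery are illustrative assumptions, not vendor specifications.]

```python
# Rough comparison of charge remaining after long idle storage,
# compounding an assumed monthly self-discharge rate.

def remaining_fraction(monthly_self_discharge: float, months: int) -> float:
    """Fraction of stored charge left after `months` of idle self-discharge."""
    return (1.0 - monthly_self_discharge) ** months

lifepo4 = remaining_fraction(0.03, 24)   # assumed ~3%/month, 2 years idle
vanadium = remaining_fraction(0.0, 24)   # flow battery: assumed ~no self-discharge

print(f"LiFePO4 after 2 years idle:  {lifepo4:.0%}")   # about 48% left
print(f"Vanadium flow after 2 years: {vanadium:.0%}")  # 100% left
```

Under these assumptions an untended LiFePO4 bank loses roughly half its charge in two years, which is the "bunker" scenario nmz787 is weighing against the flow battery's higher upfront cost.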
09:49 < hprmbridge> Perry> That’s fine, but then don’t develop opinions about what the models can and can’t do.
09:49 < hprmbridge> Perry> It’s fine to say “I can’t afford $20 to have access to an incredible technology” but not reasonable to say “I tried a much lamer version of this and it couldn’t do X.”
09:50 < hprmbridge> eleitl> I will continue to rely on publications who test the latest way more thoroughly than I ever could to form my opinions. Let's see what GPT-5 will bring, perhaps I'll buy a subscription then.
09:51 < nsh> ah, the historically problem-free approach of leaving the investigation of phenomena to other people
09:51 < nsh> :)
09:51 < hprmbridge> eleitl> You can certainly use a high enough pressure in high-carbon steel and/or at high temperatures to produce embrittlement. So I wouldn't.
09:52 < hprmbridge> Perry> “I understand from publications that ice cream tastes like fish.”
09:52 < hprmbridge> Perry> “I don’t need to ride in a car to know what they’re like, I can rely on scientific evaluations.”
09:52 < hprmbridge> Perry> FWIW, literally all of my close friends in CS are now using GPT-4 all day every day for their work. Most couldn’t imagine working without it any longer.
09:53 < hprmbridge> Perry> I’ve seen jokes about short outages shutting down silicon valley.
09:53 < hprmbridge> eleitl> @nsh I don't know enough networking, unlike people on /r/networking, to evaluate what GPT-4 tells me. There are obviously other speciality domains where that applies.
09:53 < hprmbridge> Perry> Even my sysadmin friends use it all day long. One of them was telling me this morning how he had the thing create a whole cluster for him with terraform.
09:54 < nsh> there are also time-honoured solutions to not having applicable domain knowledge :)
09:54 < hprmbridge> Perry> “I will wait until this indoor heating thing is much more well attested in the literature” is one response though I suppose.
09:55 < hprmbridge> eleitl> I could see the point if I was in business consulting. I've actually asked -- that is, GPT-3.5 -- two questions about RH Satellite, which it got wrong, because it doesn't know shit about the new versions and their limitations.
09:55 < hprmbridge> Perry> 3.5 is not 4.
09:56 < hprmbridge> Perry> And it is not only 12% worse than 4 or whatever you might think from the ratio of the version numbers.
09:56 < hprmbridge> eleitl> What is the cutoff for the data set training for GPT-4? Do you know?
09:56 < hprmbridge> Perry> but no, it is not a documentation search mechanism, and you should not use it as such.
09:57 < hprmbridge> Perry> I don’t know offhand but don’t expect it to recall documentation for you. It writes code beautifully though.
09:57 < hprmbridge> Perry> And you can of course feed it documentation.
09:57 < hprmbridge> Perry> But I get how the price of a decent lunch might put you off of trying to use it.
09:57 < hprmbridge> eleitl> I've asked it a specific question, to solve a particular Satellite problem. It would have worked in the prior versions, but not in the new versions.
09:58 < hprmbridge> Perry> I mean, here you are, working with a magical technology that can maybe 2x or 5x your programming productivity, but I get how $20 would be an insurmountable obstacle to access.
09:58 < hprmbridge> Perry> Plus why would you want to get early exposure to the thing?
09:58 < hprmbridge> Perry> Better to read third party reports.
09:59 < hprmbridge> eleitl> I don't have any programming productivity. And I can't feed it any code for security raisins.
09:59 < hprmbridge> Perry> I’m reminded of the line in the movie “Metropolitan” where someone reveals they’ve never actually read any Jane Austen but have formed a thorough opinion by reading critical essays.
10:00 < hprmbridge> Perry> Let me be blunt. Here you are, with possible access to the most transformative technology in history, and you are making excuses not to play with it.
10:00 < hprmbridge> eleitl> You would have a better selling point if you ran my one-sentence query through GPT-4. That is something I can't read about.
10:01 < hprmbridge> Perry> Nope. Go off and do it on your own, or don’t. But don’t form an opinion until you’ve touched it with your own hands.
10:02 < hprmbridge> cpopell> and sales copy
10:02 < hprmbridge> eleitl> Yeah, it is very good at producing marketing material. Among other things.
10:02 < hprmbridge> cpopell> Pretty decent at explaining why I got a phrase wrong in duolingo
10:03 < hprmbridge> cpopell> but I don't trust it for stuff I can't easily verify
10:03 < hprmbridge> eleitl> How well can it translate complicated texts?
10:03 < hprmbridge> cpopell> I don't know enough of other languages to verify
10:04 < hprmbridge> nmz787> Is there a list of languages it supports? Chinese, Hindi, Bengali?
10:04 < hprmbridge> cpopell> A friend who spoke an obscure Yiddish dialect said it did decent
10:04 < hprmbridge> eleitl> That is interesting.
10:05 < hprmbridge> eleitl> Can you use GPT-4 via plugins from vim or vscodium?
10:05 < hprmbridge> cpopell> Not something I've tried
10:06 < hprmbridge> jasoncrawford> To clarify, I am not asking about GPT-3.5 or -4 in general, which are super-impressive; I am talking about these “agent” demos people have done where they hook it up in a loop telling it to make plans and then execute on them.
10:06 < hprmbridge> cpopell> It reminds me of Oblivion's promised radiant ai
10:06 < hprmbridge> cpopell> hah
10:07 < hprmbridge> jasoncrawford> I have also personally experimented with these general kinds of techniques to get GPT-3.5 (via the ChatGPT interface) to create things that require longer arcs of coherence, like stories or tutorials, and so far I have failed
10:08 < hprmbridge> jasoncrawford> On the other hand, ChatGPT-3.5 is already better than a search engine for many queries that I have. It makes a great study companion
10:08 < hprmbridge> eleitl> Does GPT-4 have a bigger token buffer than 3.5?
10:08 < hprmbridge> jasoncrawford> (I pay for ChatGPT-4 also but it's slower and rate limited so I often just use 3.5, which is the default, and it works pretty well)
10:09 < hprmbridge> jasoncrawford> there are APIs that definitely enable bigger windows, up to 32k tokens; I only have access to the 8k-token API right now, and I'm not sure what ChatGPT uses
10:09 < hprmbridge> eleitl> Right, I've just found a thread on Reddit about it. Thanks.
10:11 -!- flooded [~flooded@146.70.202.142] has joined #hplusroadmap
10:15 -!- _flood [~flooded@146.70.174.227] has quit [Ping timeout: 248 seconds]
10:15 < hprmbridge> Perry> FWIW, my strong opinion: I’ve wanted something like this for my whole life. Now that it exists, I’ll be damned if I cheap out on getting access over the price of a couple of burritos.
10:16 < hprmbridge> jasoncrawford> Agree, totally worth paying the $20 at least once to try it… (although even 3.5 is pretty damn impressive)
10:17 < hprmbridge> Perry> It increases my productivity by a lot more than $20/day. I would happily pay thousands a seat for a programming team to have unlimited access to the 32k model. $20/month for casual use is a crazy crazy bargain, which is probably why they are making money at such an insane rate.
10:18 < hprmbridge> Perry> It’s very very far from perfect. Has serious lacunae, can’t be fully trusted, etc. Still totally worth it.
10:18 < hprmbridge> Perry> Especially once you get good at prompt engineering, which takes considerable practice. I’ve seen people give up after asking it for something in a silly way and never trying to improve the way they asked.
10:19 < hprmbridge> Perry> Single biggest peeve right now: people doing blog and twitter posts saying “haha it’s so dumb!” and not even specifying what model they use, then it turns out it was 3.5 or even not OpenAI GPT at all but Claude or llama 7B or something.
10:20 < hprmbridge> Perry> The moral equivalent of “someone said you can haul tons of freight on an internal combustion driven vehicle. I borrowed a friend’s vespa to test this theory, and let me tell you, there’s absolutely no way you could haul a shipment of lumber with this thing.”
10:21 < hprmbridge> Perry> Any time someone says “I asked ChatGPT this” and doesn’t specify the model they selected, it’s trash.
10:24 < hprmbridge> eleitl> Ok, I guess I will have to try it. Right now 3.5 is making a hash of very simple tasks, let's see whether 4 can handle it.
10:25 < hprmbridge> Perry> it might not be able to. It’s dramatically better, but it’s still not a human being.
10:25 < hprmbridge> Perry> still totally worth the price of a couple of burritos a month.
10:26 -!- Guest33 [~Guest33@ool-4a59803c.dyn.optonline.net] has joined #hplusroadmap
10:26 -!- Guest33 [~Guest33@ool-4a59803c.dyn.optonline.net] has quit [Client Quit]
10:43 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has quit [Ping timeout: 276 seconds]
10:46 -!- gptbot1337 [~x@3392BD97.skybroadband.com] has joined #hplusroadmap
10:49 < nsh> .gpt Seven consecutive odd numbers add up to 105. What is the largest of these numbers?
10:50 < gptbot1337> Let's call the smallest odd number "x."
10:50 < gptbot1337> Since there are seven consecutive odd numbers, we can write out the other six as:
10:50 < gptbot1337> x + 2
10:50 < gptbot1337> x + 4
10:50 < gptbot1337> x + 6
10:50 < gptbot1337> x + 8
10:50 < gptbot1337> x + 10
10:50 < gptbot1337> x + 12
10:50 < gptbot1337> To find the sum of these seven numbers, we add them all up:
10:50 < gptbot1337> x + (x+2) + (x+4) + (x+6) + (x+8) + (x+10) + (x+12) = 105
10:50 < gptbot1337> Simplifying this equation, we get:
10:50 < gptbot1337> 7x + 42 = 105
10:50 < gptbot1337> Subtracting 42 from both sides:
10:50 < gptbot1337> 7x = 63
10:50 < gptbot1337> Dividing by 7:
10:50 < gptbot1337> x = 9
10:50 < gptbot1337> So the smallest odd number is 9. To find the largest of these seven numbers, we just need to add 12 to it:
10:50 < gptbot1337> 9 + 12 = 21
10:50 < gptbot1337> Therefore, the largest of these seven consecutive odd numbers is 21.
10:50 -!- mode/#hplusroadmap [+o kanzure] by ChanServ
10:50 -!- gptbot1337 was kicked from #hplusroadmap by kanzure [gptbot1337]
10:50 <@kanzure> these messages should be sent to a pastebin or something, not to the channel
10:50 <@kanzure> maybe single line replies could be appropriate for the channel, like with a link to pastebin for the rest of the overflow response
10:51 -!- mode/#hplusroadmap [-o kanzure] by kanzure
10:51 < kanzure> i have written a few replies on the AMA (not done yet, will continue later): https://progressforum.org/posts/9nEEu94bnsvmg3v2t/ama-bryan-bishop-biohacker-and-founder-of-custodia-bank?commentId=GJRLceh9kbRmnMgkD
10:51 < nsh> consider writing one line replies and linking to a pastebin for the rest
10:52 < kanzure> for myself? i have considered this! i often do as much instead of pasting 20 lines of <30 character messages into the channel.
10:53 < nsh> on the AMA, obviously, and in jest
10:53 * nsh sighs
10:53 < kanzure> i was trying to do jest but the admin deleted the "what is your BMI" question and now i can't answer "over 9000"
10:53 < hprmbridge> Eli> when you talk about using GPT4, are you mostly talking about the gains from copilot? That's pretty mindblowing stuff. But, beyond copilot, what workflows are you seeing for your day-to-day that is so valuable? I've used it to ask some academic questions, but GPT hallucinates, so you have to be careful about relying on it.
10:59 < hprmbridge> Eli> Like the ability to destroy every human being in LeetCode is amazing. And DALLE is completely disruptive. But what use cases are people using for ChatGPT, ie, stubborn GPT3?
11:01 < hprmbridge> gwillen> I don't think copilot is even on GPT-4 yet. Last I was aware, it was using a derivative of the GPT-3 "codex" models.
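[editor's note: gptbot1337's arithmetic above is correct. A brute-force check (illustrative code, not from the log) confirms the seven consecutive odd numbers and the largest value:]

```python
# Verify the bot's answer: seven consecutive odd numbers summing to 105.

def seven_consecutive_odds(total: int) -> list[int]:
    """Return the seven consecutive odd numbers summing to `total`."""
    for start in range(1, total, 2):            # odd candidates for the smallest
        run = [start + 2 * i for i in range(7)] # seven consecutive odds
        if sum(run) == total:
            return run
    raise ValueError("no solution")

run = seven_consecutive_odds(105)
print(run)       # [9, 11, 13, 15, 17, 19, 21]
print(max(run))  # 21
```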
11:02 < hprmbridge> gwillen> I tried it recently and it still seemed dumber than 4. For coding, I use either ChatGPT Plus with GPT-4, or my own web UI to the GPT-4 API.
11:04 < hprmbridge> gwillen> I have gotten GPT-4 to explain advanced math to me (I started working through group representation theory, which I never managed to understand before.) Despite occasional hallucinations I think it's just massively better than anything short of 1:1 expert human tutoring. You have to use it carefully -- if it's teaching you something that you don't already know, you have to be the sort of person
11:04 < hprmbridge> gwillen> who can spot errors in unfamiliar material reliably enough to double check. I'm hoping to improve on that.
11:08 < hprmbridge> Eli> that's surprising, since my experience with copilot is that it's basically god-like. I guess GPT4 allows you to ask more general questions so you are even more abstracted? I've always felt like copilot was just a modern-day interpreter for English-language-to-code.
11:09 < hprmbridge> Eli> What the kids do in school now is just use YouTube, along with Chegg/Numerade, and Symbolab to get through complex STEM. I would be surprised if GPT4 is yet able to do my calc physics. Although, I wouldn't be surprised if GPT5/6 gets there.
11:09 < hprmbridge> gwillen> It's possible I'm mistaken and they did upgrade it to GPT-4. I didn't experiment very extensively with copilot recently.
11:10 < hprmbridge> gwillen> What more specifically do you mean by "calc physics"? I would expect it to handle any common undergraduate course with little difficulty.
11:10 < hprmbridge> Eli> calculus-based physics/engineering physics.
11:11 < hprmbridge> gwillen> I wouldn't expect that to be hard for it, you should try.
11:12 < hprmbridge> gwillen> (I mean, I would expect it to be good at explaining the concepts. If you give it hard problems that require calculation, raw GPT may have trouble, without careful chain-of-thought prompting, even GPT-4.)
11:13 < hprmbridge> eleitl> IIRC there's an interface to WolframAlpha in the works, so things should improve in the math department quite a bit.
11:13 < hprmbridge> Eli> yeah something like this. It would require critical reasoning, an ability to correctly interpret the image, and a perfect ability to do the calculus: https://cdn.discordapp.com/attachments/1064664282450628710/1095773807622496326/image.png
11:15 < hprmbridge> Eli> I just don't know how an AI could possibly do that at this point? And, even if you get everything else correct, I've seen systems like symbolab screw up specific calculus problems.
11:28 -!- codaraxis__ [~codaraxis@user/codaraxis] has joined #hplusroadmap
11:32 -!- codaraxis [~codaraxis@user/codaraxis] has quit [Ping timeout: 248 seconds]
11:37 < kanzure> https://stephenmalina.com/post/2023-01-11-viriditas-dialogue/
11:53 < docl> https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6187506/
11:54 < docl> "Open Design 3D-Printable Adjustable Micropipette that Meets the ISO Standard for Accuracy"
11:54 < hprmbridge> potatodemon> LLM with a permissive license https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm
11:58 < juri_> awesome.
12:03 -!- CUDDLERANGE [~Malvolio@idlerpg/player/Malvolio] has quit []
12:17 -!- cthlolo [~lorogue@77.33.23.154.dhcp.fibianet.dk] has quit [Read error: Connection reset by peer]
12:32 < hprmbridge> Perry> CoPilot X. But you don’t need to use CoPilot, you can just do things like pasting functions into the interface and asking it to hand you a translation into another language, or give it a spec and ask it to generate code.
12:47 < hprmbridge> gwillen> Yeah, that's what I've been doing. I'm on the waitlist for copilot X.
13:07 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has joined #hplusroadmap
13:20 -!- test_ [flooded@gateway/vpn/protonvpn/flood/x-43489060] has joined #hplusroadmap
13:23 -!- flooded [~flooded@146.70.202.142] has quit [Ping timeout: 250 seconds]
14:44 -!- TMM_ [hp@amanda.tmm.cx] has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
14:44 -!- TMM_ [hp@amanda.tmm.cx] has joined #hplusroadmap
16:13 < muurkha> fenn: 0-17ms doesn't sound like e-ink. what gives?
16:29 -!- flooded [~flooded@149.102.226.226] has joined #hplusroadmap
16:33 -!- test_ [flooded@gateway/vpn/protonvpn/flood/x-43489060] has quit [Ping timeout: 248 seconds]
16:40 < fenn> muurkha: sorry, that's the delay due to the controller chip
16:41 < fenn> if you watch the demo video it scrolls a page of text seemingly instantly
16:56 -!- codaraxis__ [~codaraxis@user/codaraxis] has quit [Ping timeout: 248 seconds]
16:57 -!- Jay_Dugger [~jwd@47-185-225-108.dlls.tx.frontiernet.net] has joined #hplusroadmap
16:58 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has quit [Ping timeout: 248 seconds]
18:58 < kanzure> https://thehub.ca/2023-04-11/make-the-future-bright-again-jason-crawford-on-the-importance-of-building-a-new-philosophy-of-progress/
19:01 -!- Malvolio [~Malvolio@idlerpg/player/Malvolio] has joined #hplusroadmap
19:38 -!- test_ [flooded@gateway/vpn/protonvpn/flood/x-43489060] has joined #hplusroadmap
19:42 -!- flooded [~flooded@149.102.226.226] has quit [Ping timeout: 248 seconds]
20:00 -!- Jay_Dugger [~jwd@47-185-225-108.dlls.tx.frontiernet.net] has quit [Ping timeout: 276 seconds]
20:07 < hprmbridge> an1lam> Repost?
20:39 < hprmbridge> potatodemon> Fun brain chip images from Dalle2 https://reddit.com/r/dalle2/comments/12jq5ny/new_machine_learning_acceleration_units_unveiled/
20:40 < hprmbridge> potatodemon> For discord people https://cdn.discordapp.com/attachments/1064664282450628710/1095916361030582292/image0.jpg
21:51 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has quit [Ping timeout: 246 seconds]
21:52 < fenn> taek42: these are some example time varying curves used as basis vectors to communicate emotion https://2.bp.blogspot.com/-9dxmlUCQx5k/T_aqNinREVI/AAAAAAAAAMg/YpVkecN1jFA/s1600/senticgraph.jpg and the explanatory paper to go with them https://web.archive.org/web/20111205044422/http://www.microsoundmusic.com/papers.html?bpid=8875#NOTE2 (manfred clynes' sentics)
22:00 -!- codaraxis [~codaraxis@user/codaraxis] has joined #hplusroadmap
22:02 -!- codaraxis__ [~codaraxis@user/codaraxis] has joined #hplusroadmap
22:05 -!- codaraxis__ [~codaraxis@user/codaraxis] has quit [Remote host closed the connection]
22:06 -!- codaraxis__ [~codaraxis@user/codaraxis] has joined #hplusroadmap
22:06 -!- codaraxis [~codaraxis@user/codaraxis] has quit [Ping timeout: 248 seconds]
22:06 < fenn> 'A human tends to be very quickly seduced to project an entity into a machine that communicates in this way. One needs in fact to develop a resistance to avoid being "sucked in". Continuing and varied interaction between man and machine with truly expressive gestures and tones of voice on the part of the machine, or of genuine musical expressiveness, make it mountingly difficult to maintain one's
22:07 < fenn> awareness that it is "only a machine".'
22:41 < fenn> oh whoops they aren't basis vectors because you don't average them together, but rather splice them together
23:15 -!- codaraxis__ [~codaraxis@user/codaraxis] has quit [Ping timeout: 250 seconds]
--- Log closed Thu Apr 13 00:00:31 2023