--- Log opened Wed Mar 29 00:00:16 2023
00:17 -!- Llamamoe [~Llamamoe@46.204.68.80.nat.umts.dynamic.t-mobile.pl] has joined #hplusroadmap
01:44 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has quit [Ping timeout: 252 seconds]
02:44 -!- test_ [~flooded@146.70.202.110] has joined #hplusroadmap
02:48 -!- flooded [~flooded@146.70.202.99] has quit [Ping timeout: 252 seconds]
03:47 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has joined #hplusroadmap
04:10 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has joined #hplusroadmap
05:19 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has left #hplusroadmap []
05:38 -!- flooded [flooded@gateway/vpn/protonvpn/flood/x-43489060] has joined #hplusroadmap
05:41 -!- test_ [~flooded@146.70.202.110] has quit [Ping timeout: 246 seconds]
06:27 -!- yashgaroth [~ffffffff@c-73-147-55-120.hsd1.va.comcast.net] has joined #hplusroadmap
07:02 -!- flooded is now known as _flood
07:02 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has joined #hplusroadmap
07:29 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has quit [Read error: Connection reset by peer]
09:25 -!- cthlolo [~lorogue@77.33.23.154.dhcp.fibianet.dk] has joined #hplusroadmap
09:28 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has joined #hplusroadmap
09:35 < kanzure> /wiki/transcripts/silicon-salon/ is missing the bunnie huang talk on infrared self-validation of circuits and chips. someone remind me to upload it.
09:40 < L29Ah> > it still has the same legal problem of using openAI for creating new language models, which is against their terms of service
09:40 < L29Ah> i'm eagerly waiting for the legal impact of stuff trained on CC-SA and GPL-copyrighted content
09:41 < L29Ah> llama is arguably free to distribute since it includes modified CC-SA text (wikipedia)
09:42 < Llamamoe> The irony of them not wanting people to train their models on ChatGPT when ChatGPT was trained on material taken without permission lmao
09:52 -!- codaraxis [~codaraxis@user/codaraxis] has quit [Remote host closed the connection]
09:53 -!- codaraxis [~codaraxis@user/codaraxis] has joined #hplusroadmap
09:57 < hprmbridge> nmz787> I asked it for a list of rectangle coordinates, so it replied with text
10:00 -!- cthlolo [~lorogue@77.33.23.154.dhcp.fibianet.dk] has quit [Read error: Connection reset by peer]
10:28 -!- codaraxis [~codaraxis@user/codaraxis] has quit [Ping timeout: 248 seconds]
10:55 -!- codaraxis [~codaraxis@user/codaraxis] has joined #hplusroadmap
13:07 < hprmbridge> Perry> Is the -4 API available?
13:09 < redlegion> API only, looks like they're not allowing the model to be let loose.
13:15 < hprmbridge> Perry> I don't understand that concept. GPT-4 access is already there for anyone who pays for it. But the API isn't generally available yet. The models themselves have never been made available.
14:06 < hprmbridge> lachlan> Just apply for access, I got it within a day or two
14:07 -!- flooded [flooded@gateway/vpn/protonvpn/flood/x-43489060] has joined #hplusroadmap
14:10 -!- _flood [flooded@gateway/vpn/protonvpn/flood/x-43489060] has quit [Ping timeout: 250 seconds]
14:32 -!- catalase_ [catalase@freebnc.bnc4you.xyz] has quit [Remote host closed the connection]
14:32 -!- lkcl [lkcl@freebnc.bnc4you.xyz] has quit [Read error: Connection reset by peer]
14:36 -!- flooded is now known as _flood
14:42 -!- catalase [catalase@freebnc.bnc4you.xyz] has joined #hplusroadmap
14:43 -!- lkcl [lkcl@freebnc.bnc4you.xyz] has joined #hplusroadmap
14:46 < pasky> we are living in interesting times https://twitter.com/mark_riedl/status/1637986261859442688?s=20
14:52 < pasky> (BTW ChatGPT suggesting Rossum [pure luck] is almost becoming a legit inbound demandgen channel for us...)
15:06 -!- flooded [~flooded@146.70.202.99] has joined #hplusroadmap
15:09 -!- _flood [flooded@gateway/vpn/protonvpn/flood/x-43489060] has quit [Ping timeout: 252 seconds]
15:36 -!- Llamamoe [~Llamamoe@46.204.68.80.nat.umts.dynamic.t-mobile.pl] has quit [Quit: Leaving.]
15:48 < hprmbridge> JS> Nice to meet everyone.
16:06 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has quit [Ping timeout: 276 seconds]
17:04 -!- RespiLateR is now known as LUXMINUT
18:53 -!- yashgaroth [~ffffffff@c-73-147-55-120.hsd1.va.comcast.net] has quit [Quit: Leaving]
19:09 < hprmbridge> Perry> https://www.lesswrong.com/posts/Aq5X9tapacnk2QGY4/pausing-ai-developments-isn-t-enough-we-need-to-shut-it-all
19:11 < hprmbridge> Perry> Eliezer goes off the deep end. It’s the logical outcome of developing a profound and completely fanatical belief system. As a friend of mine who has a serious interest in cults notes, the important part of a cult isn’t whether its beliefs have some connection to reality but that they are held to be more important than any other consideration whatsoever.
19:27 < fenn> it's about as likely to happen as unilateral nuclear disarmament
19:28 < fenn> (no way in hell)
19:36 < muurkha> Hmm, if his beliefs are correct, they *are* more important than, if not any other consideration whatsoever, at least any other consideration whose importance is limited to the time when humans exist.
19:38 < muurkha> For example, they'd clearly be more important than his example of preventing a full nuclear exchange.
19:38 < muurkha> But as fenn says, there's no way in hell he'll get his wish
19:47 < fenn> if AI is outlawed, only criminals will have AI, etc. etc.
19:48 < muurkha> right
19:48 < fenn> a global draconian surveillance apparatus to track computing resources is not effective in the face of algorithmic improvements, which can be several orders of magnitude in one breakthrough
19:48 < muurkha> good point
19:49 < fenn> criminals are already in control of enormous computing pools called botnets, but they are too stupid to use them for anything important
19:49 < fenn> if you drive legitimate AI research into a criminal underground, you're almost certainly going to get worse outcomes than if it happens in public
19:49 < muurkha> also, they are in control of the Utah Data Center
19:50 < muurkha> and those particular criminals aren't stupid
19:51 < fenn> the point being, you don't need to own computing infrastructure in order to make use of it
19:52 < muurkha> I'm not sure that's true in practice for training large deep learning models
19:52 < muurkha> I think big GPU training clusters kind of have to be purpose-built at this point
19:53 < muurkha> though, as you point out, that could change due to algorithmic improvements
19:55 < muurkha> it's at least plausible that TSMC could slow progress dramatically for a while by not making any more GPUs, but that would involve a change of management
19:56 < fenn> https://github.com/BlinkDL/RWKV-LM for example is using an RNN to (aspirationally) achieve the same function as a transformer model, but with computations happening locally instead of requiring a tightly interconnected system
19:56 < fenn> there isn't the political will to make TSMC commit suicide
19:57 < muurkha> well, probably the PRC/ROC conflict will shut it down at some point
19:57 -!- codaraxis [~codaraxis@user/codaraxis] has quit [Ping timeout: 265 seconds]
19:57 < fenn> they are building a fab in USA too
19:57 < fenn> there are lots of other fabs in the world
19:59 < muurkha> yeah, eventually fabrication capabilities would probably recover
19:59 < muurkha> I mean the US did eventually regain manned spaceflight capability, even if supersonic transports and moon landings are now science fiction again
20:00 < muurkha> but I think it will slow progress dramatically for a while
20:01 < fenn> the training time on these special purpose clusters is not very long, a few weeks usually
20:02 < fenn> those clusters will continue to exist
20:03 < fenn> what is even the scenario that plausibly results in a global halt to compute-intense AI research
20:03 < fenn> we can't even define "AI"
20:04 < fenn> nuclear war might do it, but i'm sure most would not advocate that route
20:05 < fenn> all it takes is one saudi prince to defect and be the new AI leader
20:05 < fenn> now you've just handed control of civilization over to the defector
20:05 < fenn> congrats
20:09 < docl> "And when your policy ask is that large, the only way it goes through is if policymakers realize that if they conduct business as usual, and do what’s politically easy, that means their own kids are going to die too."
20:12 < docl> https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
20:13 < muurkha> well, if Eliezer is correct, it's not to the defector but to the AI
20:13 < muurkha> if he's incorrect, it might not be to the defector either, because being the new AI leader might not grant you control of civilization
20:13 < muurkha> but it might!
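A toy sketch of the point fenn raises at 19:56 about RWKV-LM: a recurrence can reproduce the same kind of history-weighted average that attention computes, while carrying only a small running state from token to token instead of re-reading the whole sequence. This is illustrative numpy, not the actual RWKV-LM code; the scalar channels, the fixed decay constant, and the absence of numerical-stability tricks are all simplifications.

import numpy as np

def attention_like_mix(keys, values):
    """Transformer-style: every step re-reads the whole history (O(T^2) work)."""
    outputs = []
    for t in range(len(keys)):
        w = np.exp(keys[: t + 1])                       # weights over all tokens so far
        outputs.append((w * values[: t + 1]).sum() / w.sum())
    return np.array(outputs)

def recurrent_mix(keys, values, decay=0.9):
    """RNN-style recurrence: the same weighted average kept as a running
    numerator/denominator, so each step only touches constant-size local state."""
    num = den = 0.0
    outputs = []
    for k, v in zip(keys, values):
        num = decay * num + np.exp(k) * v
        den = decay * den + np.exp(k)
        outputs.append(num / den)
    return np.array(outputs)

keys, values = np.random.randn(8), np.random.randn(8)
print(attention_like_mix(keys, values))
print(recurrent_mix(keys, values, decay=1.0))           # identical to the line above

With decay=1.0 the two functions agree exactly; the real RWKV formulation adds learned per-channel decays and a bonus term for the current token, but the locality argument is the same: per-step compute and state stay constant regardless of context length.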
20:16 < fenn> i guess we should have a link here to the more mainstream policy intervention news https://futureoflife.org/open-letter/pause-giant-ai-experiments/
20:18 < fenn> if they want this to work it should be worded as a mutual assurance pledge, we need this many people to pledge not to develop AI (all of them)
20:19 < muurkha> yeah, that seems reasonable
20:19 < muurkha> except we only need to include orgs that plausibly could
20:19 < fenn> like kickstarter but more totalitarian
20:19 < muurkha> bruce511 points out https://news.ycombinator.com/user?id=bruce511
20:19 < fenn> .t
20:19 < EmmyNoether> Profile: bruce511 | Hacker News
20:20 < muurkha> oops
20:20 < muurkha> https://news.ycombinator.com/item?id=35367221 that AI is likely to be profitable enough to incentivize defection
20:20 < muurkha> in a way that nukes, leaded gas, CFCs, and even whaling are not
20:21 < fenn> well duh
20:23 < fenn> ignoring startups and clandestine research, there will be immense pressure to cheat and rules-lawyer so as to claim you aren't breaking the letter of the agreement, while breaking the spirit of the agreement
20:24 < fenn> a broad ban would cause tremendous economic damage, so nobody would agree to it
20:24 < docl> stopping all big players above a certain scale may be all that's needed to buy the time needed. smaller models are more likely to be narrow scope tool AI. less likely to emergently decrypt the secret sauce of human cognition
20:25 < fenn> docl how much time is "needed" and to achieve what exactly? the AI doomers have had decades and made zero progress, by their own admission
20:25 < docl> the small players are doing useful stuff like writing better software.
20:26 < docl> that in turn increases the chance of doom prevention being a viable thing to work on.
20:28 < fenn> it's the first future shock anybody has experienced this generation, understandably people want a chance to catch their breath. but i'm too cynical to believe that even in the face of a legitimate danger that legislators will be able to coordinate to make a sane response
20:29 < fenn> presumably people working in the AI labs themselves aren't experiencing future shock
20:31 < fenn> docl how is better software useful for doom prevention?
20:32 < fenn> docl you might find this relatable https://www.alignmentforum.org/posts/Gg9a4y8reWKtLe3Tn/the-rocket-alignment-problem
20:35 < docl> it's part of a general category. better data about what works and what doesn't, how to characterize user preferences, etc. in a sense, it is a form of increased intelligence to humanity.
20:40 < hprmbridge> Perry> Yudkowsky says risking nuclear war is better than permitting AI research to continue in “rogue” data centers.
20:49 < docl> That's your reading comprehension failure, I think.
20:49 < docl> "Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some
20:49 < docl> risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs."
20:50 < docl> > preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange
20:50 < docl> are you suggesting it isn't?
21:00 < fenn> in that sentence, yudkowsky is comparing a nuclear exchange to a large AI training run
21:01 < fenn> it's not a reading comprehension failure, that's what he's saying
21:01 < hprmbridge> Rohan> Suddenly relevant again, 8 years after his Harry Potter fanfic ended, of course he has to milk it for everything with hot takes
21:02 < hprmbridge> glenn> hey less wrong was really good
21:03 < fenn> less wrong is better now than it used to be...
21:03 < docl> not taking that approach in diplomacy implies that Russia secretly performs large training runs.
21:03 < hprmbridge> Rohan> Anyway how's my loss curve https://cdn.discordapp.com/attachments/1064664282450628710/1090848707441860688/WB_Chart_3_29_2023_9_02_49_PM.png
21:03 < fenn> rohan why so zoomed in?
21:04 < hprmbridge> Rohan> I just reloaded from chkpt and wandb doesn't know it's the same run
21:04 < hprmbridge> Rohan> Hold on
21:06 < hprmbridge> Rohan> https://cdn.discordapp.com/attachments/1064664282450628710/1090849615370256394/WB_Chart_3_29_2023_9_06_33_PM.png
21:07 < hprmbridge> Rohan> My batches are highly variable in terms of unmasked sequence length
21:07 < hprmbridge> Rohan> Could be the spikiness
21:08 < hprmbridge> Rohan> Or I'm just new at this, lr 1e-5 btw
21:08 < hprmbridge> glenn> im just curious, do you think eliezer is wrong?
21:08 < hprmbridge> Rohan> I don't really care, he's a lolcow
21:17 -!- codaraxis [~codaraxis@user/codaraxis] has joined #hplusroadmap
21:24 -!- test_ [~flooded@146.70.174.211] has joined #hplusroadmap
21:28 -!- flooded [~flooded@146.70.202.99] has quit [Ping timeout: 265 seconds]
22:05 < fenn> 'I had a professor who was involved in that kind of research at the time of the Asilomar Conference. He said it was all very good, everyone agreed to pause their research until people established guidelines for safety. And then once the guidelines were established and research was allowed to resume, everyone immediately published all the research that they had never stopped working on during the
22:05 < fenn> "pause".'
23:08 -!- Gooberpatrol_66 [~Gooberpat@user/gooberpatrol66] has quit [Ping timeout: 252 seconds]
--- Log closed Thu Mar 30 00:00:17 2023
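Two minimal sketches prompted by the 21:04-21:07 exchange, assuming PyTorch and the wandb Python client; the project name and run id are hypothetical placeholders and this is not Rohan's actual training code. The first shows how passing the saved run id with resume="must" makes Weights & Biases continue the existing chart after a checkpoint reload instead of starting a new, zoomed-in run; the second computes the loss per unmasked token, so that batches with very different unmasked sequence lengths log comparable values.

import torch.nn.functional as F
import wandb

# (1) Resume the *same* wandb run after reloading from a checkpoint.
# Assumes the run id was saved alongside the checkpoint the first time around;
# "my-finetune" and "saved-run-id" are placeholders.
run = wandb.init(project="my-finetune", id="saved-run-id", resume="must")

# (2) Per-token loss for masked/padded batches: sum cross-entropy over the real
# (unmasked) positions only, then divide by how many there were. One common
# source of a spiky curve is summing (or averaging over padded positions)
# instead, which makes the logged value depend on batch length.
def masked_lm_loss(logits, labels, ignore_index=-100):
    # logits: (batch, seq, vocab); labels: (batch, seq), ignore_index on masked/pad positions
    summed = F.cross_entropy(
        logits.transpose(1, 2),   # cross_entropy expects the class dim second
        labels,
        ignore_index=ignore_index,
        reduction="sum",
    )
    n_tokens = (labels != ignore_index).sum().clamp(min=1)
    return summed / n_tokens

Even with per-token normalization, batches containing only a few unmasked tokens give noisier loss estimates, so some spikiness can remain; gradient accumulation or bucketing batches by length are the usual ways to smooth that out.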