--- Log opened Mon Nov 27 00:00:37 2023
00:06 < fenn> docl: i've got bad news for you; we're already in a hardware overhang scenario
00:06 < fenn> that's the "34 trillion liters of water"
00:11 < fenn> https://www.lesswrong.com/posts/N6vZEnCn6A95Xn39p/are-we-in-an-ai-overhang
00:14 < fenn> i need a smart person to read these two papers and tell me if they're real, and if so explain how it works in a way i can understand https://arxiv.org/abs/2311.10770 https://arxiv.org/abs/2308.14711
00:14 < fenn> .t
00:14 < EmmyNoether> [2311.10770] Exponentially Faster Language Modelling
00:14 < fenn> because it would upset the apple cart quite a lot if true
00:17 < fenn> crack the dam is a better metaphor
00:37 < hprmbridge> kanzure> are you following wei dai's attempts to scold me
00:41 -!- Llamamoe [~Llamamoe@188.146.96.109] has joined #hplusroadmap
00:45 < fenn> yes
01:09 < hprmbridge> kanzure> "Whole body gestational donation" https://link.springer.com/article/10.1007/s11017-022-09599-8
01:16 < fenn> you can only have babies if you're dead
01:16 < fenn> it's only fair
01:18 < hprmbridge> kanzure> an eye for an eye.. makes the world, go round?
01:19 < fenn> can i get a clone of myself in my persistent vegetative body
01:19 < fenn> is that ethical?
01:20 < hprmbridge> kanzure> you wouldn't be able to parent little fenn because you're a veggie
01:21 < fenn> parents are overrated anyway
01:21 < hprmbridge> kanzure> elaborate on this plan
01:22 < fenn> well ethically speaking, a robot should do the parenting
01:22 < fenn> it would be unfair to force others to raise my replacement body
01:23 < fenn> the baby would be fed a steady diet of soylent and anime, until graduating to mind-distillation or synthetic data derived from my archives
01:24 < fenn> it's not like i'm planning to be in a persistent vegetative state in the first place
01:25 < fenn> should every human be required to maintain an 18 year stockpile of soylent?
01:25 < fenn> just in case?
01:27 < fenn> as for the biology, i guess we'd have to sprinkle some yamanaka factors and such to jump start gametogenesis in vitro
01:27 < fenn> surely all of that could be automated as well
01:28 < fenn> an abdominal cavity pregnancy draws much higher nutrient load from the host than a uterine pregnancy. it's unclear what effect this has on development
01:38 < fenn> heh "pregnancy ... should properly speaking be medically contra-indicated for women generally."
01:39 < fenn> worse than measles
01:42 < hprmbridge> kanzure> imagine requiring all employees to sit in a circle and say their most vulnerable thoughts for group debugging and alignment https://www.lesswrong.com/posts/aFyWFwGWBsP5DZbHF/circling
01:45 < fenn> yeah very clearly not professional conduct https://realsocialskills.org/2014/07/17/nonviolent-communication-can-be-emotionally-violent/
01:46 < fenn> tumblr hysteria for what should have been common sense
01:48 < fenn> it's an excellent way to gather compromat
01:48 < hprmbridge> kanzure> need another link to describe how many red flags this should raise
01:48 < fenn> blackmail material
01:50 < fenn> ugh i already wasted enough time on this
01:52 < hprmbridge> kanzure> https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe
01:54 < hprmbridge> kanzure> "Trainers were often doing vulnerable, deep psychological work with people with whom they also lived, made funding decisions about, or relied on for friendship. Sometimes people debugged each other symmetrically, but mostly there was a hierarchical, asymmetric structure of vulnerability; underlings debugged those lower than them on the totem pole, never their superiors, and superiors did debugging
01:54 < hprmbridge> kanzure> with other superiors."
01:54 < hprmbridge> kanzure> ideological "debugging", beautiful
01:56 < hprmbridge> kanzure> https://www.greaterwrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe/comment/dLyEcki7dBdxFkvJd
02:16 < fenn> another practice i put in the same bin as "circling" is eugene gendlin's "focusing"
02:16 < fenn> i.e. i want nothing to do with it, but i don't know what it is or what it's for
02:42 -!- millefy [~Millefeui@anantes-651-1-211-3.w90-25.abo.wanadoo.fr] has quit [Ping timeout: 252 seconds]
04:01 < hprmbridge> nmz787> Fenn it sounds like your robot raising your clone and eventually feeding it your archived datalogs could make for a good sci-fi story. Where the clone realizes the error of its original's ways, and [escapes or something else that figuratively throws off the shackles of their originator]
04:02 < fenn> well i've realized the error of my ways already
04:03 < hprmbridge> jay_dugger> Cf. Kate Wilhelm's "Where Late The Sweet Birds Sang"
04:04 < fenn> ah yes our resident sci-fi retrieval bot
04:05 < hprmbridge> jay_dugger> Back from the bed, yes.
04:05 < hprmbridge> jay_dugger> "dead", rather.
04:05 < fenn> causal LLMs seem to have trouble with this specific task, probably isomorphic to the A is B therefore B is A problem
04:09 -!- ike8 [e8f913dbdf@irc.cheogram.com] has quit [Ping timeout: 256 seconds]
04:30 < kanzure> hmph
04:33 < docl> what's bugging me is how all this bay area cult stuff is seemingly being used in place of argument. AI x-risk is real enough or not, but you can't just say "it can't be real because believing that would be a cult"
04:33 < docl> that's as bad as "it can't be real because it's in science fiction"
04:33 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has left #hplusroadmap []
04:34 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has joined #hplusroadmap
04:34 < kanzure> superintelligence could do all sorts of things, including destroying ~everything. but you already knew that i agreed with this.
04:34 < docl> so what do you mean when you say x-risk isn't a real problem?
04:35 < kanzure> real problems are solvable, otherwise it's just apocalyptic doom stuff. superintelligence by definition outsmarts everything you have.
04:35 < kanzure> nah, let me think of a better answer
04:36 < docl> kind of a creative use of language :)
04:37 < docl> my take would be that you actually can solve it, it's just a lot of tedious work nobody yet really knows how to do. but there are some tedious blog posts on lesswrong you could use as a starting point.
04:43 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has joined #hplusroadmap
04:43 < docl> I mean. you can have asymmetric cryptography, right? why could we not make systems that are way harder for ASI to break than they are to make
04:46 < muurkha> we don't know how to make things that are hard to break
04:46 < muurkha> we don't know if P = NP
04:47 < muurkha> one of the first efforts to build asymmetric cryptography was based on an NP-complete decision problem: https://en.wikipedia.org/wiki/Knapsack_problem#Computational_complexity
04:48 < kanzure> there are many things that could destroy us all, including humans (nobody has really tried)
04:48 < muurkha> but a polynomial-time attack was found only seven years later, by a mere human: https://en.wikipedia.org/wiki/Merkle%E2%80%93Hellman_knapsack_cryptosystem#Cryptanalysis
04:52 < muurkha> this is described allegorically in http://web.archive.org/web/20011125184454/http://www.ai.mit.edu/people/shivers/autoweapons.html
04:52 < muurkha> the famous Graduate Student Guide to Automatic Weapons
04:53 < muurkha> > You've probably heard of Felton (National Academy of Science, IEEE Past President, NRA sustaining member). My advisor told me later that Felton's academic peak had come at that now-infamous 1982 Symposium on Data Encryption, when he presented the plaintext of the encrypted challenge message that Rob Merkin had published earlier that year using his "phonebooth packing" trap-door algorithm.
04:53 < muurkha> According to my advisor, Felton wordlessly walked up to the chalkboard, wrote down the plaintext, cranked out the multiplies and modulus operations by hand, and wrote down the result, which was obviously identical to the encrypted text Merkin had published in CACM. Then, still without saying a word, he tossed the chalk over his shoulder, spun around, drew and put a 158-grain semi-wadcutter right
04:53 < muurkha> between Merkin's eyes. As the echoes from the shot reverberated through the room, he stood there, smoke drifting from the muzzle of his .357 Magnum, and uttered the first words of the entire presentation: "Any questions?" There was a moment of stunned silence, then the entire conference hall erupted in wild applause. God, I wish I'd been there.
04:57 -!- TMM_ [hp@amanda.tmm.cx] has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
04:57 -!- TMM_ [hp@amanda.tmm.cx] has joined #hplusroadmap
05:27 -!- ike8 [e8f913dbdf@irc.cheogram.com] has joined #hplusroadmap
05:32 -!- ike8 [e8f913dbdf@irc.cheogram.com] has quit [Ping timeout: 260 seconds]
06:20 < docl> if we're relying on continued "nobody has really tried" which might end up being the only option for some time to come, can we at least devote some cycles to ensuring "nobody can do this by accident" remains the case consistently?
06:38 < muurkha> what?
06:40 < muurkha> make sure nobody can do what? use bay area cult stuff in place of argument? destroy everything? outsmart everything you have? solve the AI existential risk problem? use some tedious blog posts on lesswrong as a starting point? make systems that are hard to break? destroy us all? find a polynomial-time attack? execute fictional cryptographers at point-blank range?
06:41 < docl> ah, that was replying to kanzure
06:41 < muurkha> kanzure saying what?
06:41 < muurkha> those seem to be all the recent topics of discussion, and in my book they neatly divide into "things that would not be desirable to prevent people from doing" and "things that we have no idea how to prevent people from doing"
06:41 < docl> > there are many things that could destroy us all, including humans (nobody has really tried)
06:42 < muurkha> oh, that was the third question in my list, so you could have just said "destroy everything"
06:42 < docl> sorry for the confusion
06:42 < muurkha> oh well I guess "destroy us all" is later in the list but it's kind of the same thing
06:43 < muurkha> anyway, nobody has any idea how to ensure nobody can destroy us all by accident
06:43 < muurkha> if strong AI existential risk is real, the only way you could prevent it is to ensure that nobody can create a strong AI, because creating a strong AI is what might destroy us all by accident
06:44 < muurkha> but it's impossible to ensure that nobody can create a strong AI, as far as I can tell
06:44 < docl> sure nobody knows exactly, that's why I said we should devote some cycles to it. presumably people get a better idea of how it would be done when closer to the achievement
06:50 -!- A_Dragon is now known as Festive_Derg
06:55 < muurkha> basically you're describing the reasoning behind the formation of the Singularity Institute, MIRI, and Less Wrong
06:55 < muurkha> it didn't work
06:55 < hprmbridge> kanzure> yep
07:03 -!- Llamamoe [~Llamamoe@188.146.96.109] has quit [Quit: Leaving.]
07:04 < docl> you don't have to devote a whole institute, I said a few cycles. this can be a distributed effort.
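[editor's note: the Merkle–Hellman knapsack cryptosystem muurkha references above can be sketched in a few lines of Python. This is a toy illustration, not the original specification: the `encrypt`/`decrypt` helpers and the particular numbers (a standard textbook parameter set) are ours, and the scheme itself has been broken since Shamir's polynomial-time attack, which is the whole point of the anecdote.]

```python
# Toy Merkle-Hellman knapsack cryptosystem. The private key makes the
# subset-sum problem easy (superincreasing sequence); modular disguise
# is supposed to make the public version hard. Illustrative parameters;
# insecure by design and by history.

# Private key: superincreasing sequence w (each element exceeds the sum
# of all previous ones), modulus q > sum(w), multiplier r coprime to q.
w = [2, 7, 11, 21, 42, 89, 180, 354]
q = 881
r = 588

# Public key: the disguised sequence, no longer superincreasing.
b = [(r * wi) % q for wi in w]

def encrypt(bits):
    """Ciphertext = subset sum of public-key elements selected by bits."""
    return sum(bi for bit, bi in zip(bits, b) if bit)

def decrypt(c):
    """Strip the modular disguise, then greedily solve the easy
    superincreasing subset-sum problem."""
    r_inv = pow(r, -1, q)          # modular inverse (Python 3.8+)
    c2 = (c * r_inv) % q
    bits = []
    for wi in reversed(w):         # greedy, largest element first
        if wi <= c2:
            bits.append(1)
            c2 -= wi
        else:
            bits.append(0)
    return list(reversed(bits))

msg = [1, 0, 1, 1, 0, 0, 1, 0]
assert decrypt(encrypt(msg)) == msg
```

The asymmetry docl asked about is visible here: decryption with the private key is a linear greedy pass, while recovering `bits` from the public `b` alone is general subset-sum. Shamir showed the disguise leaks enough structure for a polynomial-time attack, which is muurkha's point that "based on an NP-complete problem" does not mean "hard to break".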
07:10 < muurkha> as https://aiascendant.substack.com/p/extropias-children-chapter-5-irrationalism points out, a lot of the SingInst research was never published because Eliezer thought giving people a better idea of how to do strong AI would do more harm than good
07:10 < muurkha> which of course makes it impossible to tell whether it actually represented progress; certainly it doesn't seem to have been related to what's actually working today
07:29 -!- ike8 [e8f913dbdf@irc.cheogram.com] has joined #hplusroadmap
08:49 -!- justanotheruser [~justanoth@gateway/tor-sasl/justanotheruser] has quit [Remote host closed the connection]
08:49 -!- justanotheruser [~justanoth@gateway/tor-sasl/justanotheruser] has joined #hplusroadmap
08:53 -!- justanotheruser [~justanoth@gateway/tor-sasl/justanotheruser] has quit [Remote host closed the connection]
08:53 -!- justanotheruser [~justanoth@gateway/tor-sasl/justanotheruser] has joined #hplusroadmap
10:40 -!- ike8 [e8f913dbdf@irc.cheogram.com] has quit [Ping timeout: 240 seconds]
10:58 -!- ike8 [e8f913dbdf@irc.cheogram.com] has joined #hplusroadmap
11:46 < kanzure> free animations from reply guys now, that's cool https://twitter.com/BITCOINALLCAPS/status/1729176250630164537
11:48 -!- Mabel [~Malvolio@idlerpg/player/Malvolio] has quit [Killed (silver.libera.chat (Nickname regained by services))]
12:04 -!- Mabel [~Malvolio@idlerpg/player/Malvolio] has joined #hplusroadmap
12:10 -!- justanotheruser [~justanoth@gateway/tor-sasl/justanotheruser] has quit [Ping timeout: 240 seconds]
12:26 -!- justanotheruser [~justanoth@gateway/tor-sasl/justanotheruser] has joined #hplusroadmap
12:40 < jrayhawk> awkwardly, half of the early bitcoiners were lesswrongers
12:41 < kanzure> nanotube :| :|
13:01 -!- ike8 [e8f913dbdf@irc.cheogram.com] has quit [Ping timeout: 255 seconds]
13:41 -!- mlaga97 [~quassel@user/mlaga97] has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
13:58 -!- ike8 [e8f913dbdf@irc.cheogram.com] has joined #hplusroadmap
14:54 -!- ike8 [e8f913dbdf@irc.cheogram.com] has quit [Ping timeout: 245 seconds]
15:14 < hprmbridge> kanzure> lest we forget, vitalik is one of them https://twitter.com/VitalikButerin/status/1729251808936362327 although less of an evangelist
15:45 < hprmbridge> kanzure> "so now that I'm out of binance, how's that whole anti-aging thing going?" https://twitter.com/cz_binance/status/1729277854431527163
15:47 -!- srk- [~sorki@user/srk] has joined #hplusroadmap
15:50 -!- srk [~sorki@user/srk] has quit [Ping timeout: 255 seconds]
15:50 -!- srk- is now known as srk
15:56 -!- justanotheruser [~justanoth@gateway/tor-sasl/justanotheruser] has quit [Remote host closed the connection]
16:02 -!- srk- [~sorki@user/srk] has joined #hplusroadmap
16:05 -!- srk [~sorki@user/srk] has quit [Ping timeout: 255 seconds]
16:05 -!- srk- is now known as srk
16:11 -!- srk- [~sorki@user/srk] has joined #hplusroadmap
16:12 -!- srk| [~sorki@user/srk] has joined #hplusroadmap
16:15 -!- srk [~sorki@user/srk] has quit [Ping timeout: 268 seconds]
16:15 -!- srk| is now known as srk
16:16 -!- srk- [~sorki@user/srk] has quit [Ping timeout: 260 seconds]
16:51 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has quit [Ping timeout: 255 seconds]
16:58 -!- TMM_ [hp@amanda.tmm.cx] has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
16:58 -!- TMM_ [hp@amanda.tmm.cx] has joined #hplusroadmap
17:18 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has joined #hplusroadmap
17:48 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has quit [Ping timeout: 240 seconds]
17:49 -!- TMA [tma@twin.jikos.cz] has quit [Ping timeout: 256 seconds]
17:50 -!- Betawolf [~matthew@xn--bta-yla.net] has quit [Ping timeout: 264 seconds]
17:50 -!- Betawolf [~matthew@xn--bta-yla.net] has joined #hplusroadmap
17:50 -!- TMA [tma@twin.jikos.cz] has joined #hplusroadmap
21:07 -!- Guest79 [~Guest75@apn-37-248-213-246.dynamic.gprs.plus.pl] has joined #hplusroadmap
21:08 -!- Guest79 [~Guest75@apn-37-248-213-246.dynamic.gprs.plus.pl] has quit [Client Quit]
--- Log closed Tue Nov 28 00:00:38 2023