--- Log opened Sat May 27 00:00:13 2023
00:24 -!- cthlolo [~lorogue@77.33.23.154] has joined #hplusroadmap
00:32 -!- cthlolo [~lorogue@77.33.23.154] has quit [Read error: Connection reset by peer]
00:50 -!- cthlolo [~lorogue@77.33.23.154.dhcp.fibianet.dk] has joined #hplusroadmap
01:23 < alethkit> The NPC glasses!
02:08 -!- ebo011[m] [~ebo011mat@2001:470:69fc:105::3:4dec] has quit [Remote host closed the connection]
02:11 -!- cthlolo [~lorogue@77.33.23.154.dhcp.fibianet.dk] has quit [Read error: Connection reset by peer]
02:11 -!- cthlolo [~lorogue@77.33.23.154.dhcp.fibianet.dk] has joined #hplusroadmap
02:16 -!- flooded [flooded@gateway/vpn/protonvpn/flood/x-43489060] has joined #hplusroadmap
02:18 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has left #hplusroadmap []
02:20 -!- test_ [flooded@gateway/vpn/protonvpn/flood/x-43489060] has quit [Ping timeout: 240 seconds]
03:10 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has joined #hplusroadmap
03:13 -!- Betawolf_ is now known as Betawolf
05:10 -!- Jay_Dugger [~jwd@47.189.8.217] has joined #hplusroadmap
05:10 < Jay_Dugger> Hello, everyone.
05:10 < hprmbridge> kanzure> hi
05:18 -!- Gooberpatrol66 [~Gooberpat@user/gooberpatrol66] has quit [Ping timeout: 256 seconds]
05:19 -!- Gooberpatrol66 [~Gooberpat@user/gooberpatrol66] has joined #hplusroadmap
05:26 -!- test_ [flooded@gateway/vpn/protonvpn/flood/x-43489060] has joined #hplusroadmap
05:27 -!- Jay_Dugger [~jwd@47.189.8.217] has quit [Quit: Leaving]
05:29 -!- flooded [flooded@gateway/vpn/protonvpn/flood/x-43489060] has quit [Ping timeout: 246 seconds]
06:09 -!- TMM_ [hp@amanda.tmm.cx] has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
06:09 -!- TMM_ [hp@amanda.tmm.cx] has joined #hplusroadmap
07:10 -!- stipa_ [~stipa@user/stipa] has joined #hplusroadmap
07:10 -!- stipa [~stipa@user/stipa] has quit [Read error: Connection reset by peer]
07:10 -!- stipa_ is now known as stipa
08:35 -!- flooded [~flooded@146.70.195.99] has joined #hplusroadmap
08:39 -!- test_ [flooded@gateway/vpn/protonvpn/flood/x-43489060] has quit [Ping timeout: 248 seconds]
09:34 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has joined #hplusroadmap
10:07 -!- Urchin[emacs] [~user@user/urchin] has quit [Ping timeout: 240 seconds]
10:11 -!- Urchin[emacs] [~user@user/urchin] has joined #hplusroadmap
10:12 -!- Guest68 [~Guest68@177.125.187.69] has joined #hplusroadmap
10:12 < Guest68> what is this?
10:14 -!- Guest68 [~Guest68@177.125.187.69] has quit [Client Quit]
11:13 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has left #hplusroadmap []
13:10 -!- Urchin[emacs] [~user@user/urchin] has quit [Remote host closed the connection]
13:34 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has joined #hplusroadmap
14:53 -!- test_ [flooded@gateway/vpn/protonvpn/flood/x-43489060] has joined #hplusroadmap
14:57 -!- flooded [~flooded@146.70.195.99] has quit [Ping timeout: 250 seconds]
15:32 -!- juri_ [~juri@84-19-175-187.pool.ovpn.com] has quit [Ping timeout: 250 seconds]
15:48 -!- cthlolo [~lorogue@77.33.23.154.dhcp.fibianet.dk] has quit [Read error: Connection reset by peer]
16:20 -!- juri_ [~juri@84-19-175-187.pool.ovpn.com] has joined #hplusroadmap
16:28 < hprmbridge> nmz787> Murrkha: gpt4 says "Mixed-integer linear programming (MILP) is a specific class of optimization problems that includes both continuous and integer variables. The CP-SAT solver in Google's OR-Tools that you're currently using is, in fact, a type of MILP solver.
16:28 < hprmbridge> nmz787> It uses a combination of techniques from constraint programming (CP) and linear programming (LP) to find optimal solutions."
16:28 < hprmbridge> nmz787> Muurkha^
17:11 -!- gwillen [gwillen@user/gwillen] has quit [Ping timeout: 264 seconds]
17:12 -!- gwillen [gwillen@user/gwillen] has joined #hplusroadmap
17:13 -!- TMM_ [hp@amanda.tmm.cx] has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
17:14 -!- TMM_ [hp@amanda.tmm.cx] has joined #hplusroadmap
17:59 < muurkha> nmz787__: that is entirely correct as far as I know, though I haven't heard of CP-SAT
18:37 -!- helleshin [~talinck@108-225-123-172.lightspeed.cntmoh.sbcglobal.net] has joined #hplusroadmap
18:38 -!- hellleshin [~talinck@108-225-123-172.lightspeed.cntmoh.sbcglobal.net] has quit [Ping timeout: 240 seconds]
19:40 < fenn> "AI risk is string theory for computer programmers. It's fun to think about, interesting, and completely inaccessible to experiment given our current technology. You can build crystal palaces of thought, working from first principles, then climb up inside them and pull the ladder up behind you."
19:56 < jrayhawk> what
19:59 < jrayhawk> out-of-distribution mesa-optimizer misalignment is not "inaccessible to experiment"; it is the everyday struggle of everyone interacting with every gradient descent system
19:59 < jrayhawk> what sort of clown wrote that
20:11 < jrayhawk> even the term "mesa-optimizer" came from alignment risk researchers, and is an incredibly broadly applicable concept
20:12 -!- gwillen [gwillen@user/gwillen] has quit [Ping timeout: 256 seconds]
20:12 -!- gwillen [gwillen@user/gwillen] has joined #hplusroadmap
20:24 < hprmbridge> kanzure> human codon recoding but you do it differently for different subsets of the population so that viruses can't be recoded to attack everyone at once
20:38 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has quit [Ping timeout: 240 seconds]
20:57 < muurkha> well, if the alignment risk researchers are right, the first experiment with AI risk is almost certain to be the last
20:58 < muurkha> and also not feasible given our current technology
21:26 -!- Urchin[emacs] [~user@user/urchin] has joined #hplusroadmap
21:34 < hprmbridge> kanzure> ultrasound induced torpor https://www.nature.com/articles/s42255-023-00804-z
21:35 < hprmbridge> kanzure> what's the name of the gas perfusion cryonics startup that tonya jones is currently doing
21:38 < muurkha> (also maciej gave this talk in 02016 when it was much farther from feasibility)
21:43 < fenn> ^keinice bio
21:46 < fenn> "mesa-optimizer misalignment" has been a thing since the 1980s
21:46 < muurkha> no, "mesa-optimizer" is a malapropism from the late 02010s
21:47 < fenn> whatever, i hate the jargon
21:47 < muurkha> or solecism
21:47 < fenn> evolved formal systems doing unexpected things
21:47 < muurkha> oh, sure
21:48 < fenn> what is "mesa-" anyway
21:48 < fenn> it's a second order effect
21:49 < muurkha> an error from someone wanting to pretend they knew Latin
21:49 < muurkha> or Greek I guess
21:49 < fenn> greek
21:49 < muurkha> .t https://arxiv.org/abs/1803.03453
21:49 < EmmyNoether> [1803.03453] The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities
21:49 < muurkha> is that paper what you mean?
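
(A minimal sketch of the OR-Tools CP-SAT usage nmz787 and muurkha discuss at 16:28/17:59 above. The variables, bounds, and coefficients are illustrative assumptions, not anything from the log. One caveat to the quoted gpt4 summary: CP-SAT itself works purely over integers, so continuous variables from a true MILP formulation have to be scaled to fixed-point integers before modeling.)

    from ortools.sat.python import cp_model

    model = cp_model.CpModel()

    # Integer decision variables. CP-SAT has no native continuous
    # variables; rational quantities must be scaled to integers.
    x = model.NewIntVar(0, 10, "x")
    y = model.NewIntVar(0, 10, "y")

    # Linear constraints, as in a MILP formulation.
    model.Add(2 * x + 3 * y <= 24)
    model.Add(x + y >= 4)

    # Linear objective.
    model.Maximize(5 * x + 4 * y)

    solver = cp_model.CpSolver()
    status = solver.Solve(model)
    if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        print("x =", solver.Value(x), "y =", solver.Value(y))
        print("objective =", solver.ObjectiveValue())
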
21:50 < fenn> i object to jrayhawk equating this sort of bug with millenniarist doom prophecies of AI foom apocalypse
21:51 < fenn> millenniariaristical*
21:51 < jrayhawk> gesundheit
21:59 < jrayhawk> if you think inner alignment problems started in the 1980s, the whole of evolution would like a word with you
22:01 < muurkha> well, fenn did say *formal* systems
22:01 < muurkha> possibly that excludes bags of enzymes
22:01 < fenn> i'm thinking of the anecdotes about artificial life simulations
22:02 < fenn> indeed human instinctual drives being suboptimal for reproduction is often used to explain the concept
22:03 < muurkha> or for survival
22:04 < jrayhawk> imagine if the fields of deontology and virtue ethics had that concept available to them
22:04 < jrayhawk> so much argumentation would've been spared
22:05 < fenn> is a brain a mesa-optimizer? asking for a friend
22:05 < fenn> gpt-4 says yes
22:06 < jrayhawk> depends on what you conceptualize as the base
22:08 < fenn> "Evolution optimizes for reproductive success, while the brain, as a byproduct, optimizes for various objectives like problem-solving, learning, and decision-making." ... and i would add, eating tasty food, lusting after healthy bodies, playing status games, and so on. heuristics that are useful for reproduction but not directly reproductive. to the detriment of reproduction sometimes, for
22:08 < fenn> example when women pursue careers and status, then delay having children until it's too late
22:09 < jrayhawk> arguably transhumanism is a philosophy about the (previously-)mesa-optimizer of agent-oriented general induction outstripping the (previously-)base optimizer of evolution
22:09 < jrayhawk> to the point where evolution is almost meaningless in comparison
22:09 < fenn> they're both in play
22:10 < fenn> if all transhumanists die in a freak memetic accident, their evolutionary fitness goes down
22:11 < fenn> to zero
22:11 < fenn> the meme may survive in books and media
22:11 < fenn> is the meme an agent?
22:13 < jrayhawk> is the meme capable of defecting in iterated prisoner's dilemmas
22:14 < fenn> yes
22:14 * fenn proceeds to rationalize the random coin flip
22:14 < jrayhawk> must be a damned sophisticated meme
22:16 < fenn> "I am sending you out like sheep among wolves. Therefore be as shrewd as snakes and as innocent as doves."
22:18 < fenn> in contrast, transhumanism is about giving up that innocence, realizing you're made of meat and the meat can be changed to do the bidding of an agent. the whole mess gets twisted up in meta-ethics until nothing is recognizable anymore
22:20 < fenn> uh, "philosophy of mind" apparently, not meta-ethics
22:21 < fenn> if even christian memes can be agentic, transhumanist memes ought to be at least twice as much so
22:21 < fenn> mister president, we cannot let there be an agency gap!
22:23 < fenn> i doubt the fields of deontology or virtue ethics were formulated with the caveat that the rules or virtues themselves could become alive
22:24 < fenn> that would have seemed like a step back toward animism
22:27 < jrayhawk> i lost track of your intuitions at the point at which memes were agentic
22:27 < jrayhawk> should exert effort on this
22:32 < fenn> i got distracted arguing about kant with chatgpt.
22:32 < fenn> i ought to know better
22:32 < jrayhawk> who won
22:32 < fenn> it was a stalemate
22:33 < fenn> say all transhumanists die because we make a cult that is just so attractive to the mindset, yet the memes live on in the books
22:33 < muurkha> how many worshippers does Amon-Ra have today?
22:34 < fenn> most books are burned, and in the ensuing dark ages the deontologists or whoever live out their sad short ignorant lives, but a few lucky copies survive in the landfills to be unearthed after the new renaissance
22:35 < fenn> the meme re-creates itself, due in no small part to having encoded itself redundantly as part of its, uh, meme-ness.
22:36 < fenn> (nobody widely duplicates memes that present themselves as unimportant, so the meme presenting itself as an important concept is responsible for its survival)
22:36 < fenn> the meme is competing with other memes for attention and book pages
22:37 < fenn> by sharing book space and mind space with other memes it can mutate and become more fit through recombination
22:38 < fenn> this is analogous to cooperating in a prisoner's dilemma
22:38 < fenn> by asserting it is the one true meme yadda yadda it gains more attention and mindshare, and this is analogous to defecting
22:39 < jrayhawk> i can't make that leap
22:40 < fenn> the iterated prisoner's dilemma is just a consequence of time passing and memes changing as the minds that they inhabit are exposed to new memes (state), and the evolution as memes are transmitted from mind to mind changes their makeup (genome)
22:41 < fenn> i think the melatonin is kicking in :\
22:41 < muurkha> sleep well
22:41 < fenn> not at all
22:42 < fenn> i have this stupid schedule that is no good but i'm also not quite willing to do the even worse schedule to loop around and fix it (fix it temporarily anyway)
22:42 < fenn> anyway, sorry if this seems like rambling, but i'm pretty sure there's a real thought here
22:44 < fenn> jrayhawk's "agent-oriented general induction" has operated so far in the realm of memes, and transhumanism predicts it will soon operate in the realm of non-reproducing information (instrumental goals of artificial intelligence)
22:44 < fenn> does that sound right?
22:47 < muurkha> can't make heads or tails of it
22:50 < fenn> if it's useful for a paperclip optimizer to harvest human blood for iron, because it can make more paperclips that way, it won't spread the blood lust to other AIs that are optimizing for production of rubber duckies, because that blood lust won't be useful to them
22:51 < muurkha> but that's a non-convergent instrumental goal
22:51 < fenn> if all AIs were paperclip maximizers, it would be?
22:51 < fenn> humans are all pretty much the same, for our purposes here
22:52 < fenn> AIs may or may not share values ("what is thalience" and similar questions)
22:53 < fenn> (from "ventus" by karl schroeder)
22:53 < muurkha> why might clippy and ducky have conversations at all?
22:53 < muurkha> perhaps to negotiate, or perhaps to learn from one another?
22:54 < muurkha> in the first case, ducky might agree to harvest blood to get something in exchange from clippy
22:54 < fenn> next you will be lecturing me about the categorical imperative, and failing to give an example of a universally applicable norm
22:55 < fenn> how did "we don't trade with ants" get resolved?
22:55 < muurkha> in the second case, their debate might lead them to the conclusion that control over more matter is a convergent instrumental goal
22:56 < muurkha> I don't think we can predict these things
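
(A toy model of the iterated prisoner's dilemma framing that jrayhawk raises at 22:13 and fenn maps onto memes at 22:36-22:40. The strategies and payoff matrix below are the textbook defaults, chosen purely for illustration; nothing here comes from the log itself.)

    # Payoffs are (my_score, their_score) indexed by (my_move, their_move);
    # "C" = cooperate, "D" = defect.
    PAYOFF = {
        ("C", "C"): (3, 3),
        ("C", "D"): (0, 5),
        ("D", "C"): (5, 0),
        ("D", "D"): (1, 1),
    }

    def tit_for_tat(history):
        # Cooperate first, then mirror the opponent's previous move.
        return history[-1] if history else "C"

    def always_defect(history):
        return "D"

    def play(strategy_a, strategy_b, rounds=20):
        score_a = score_b = 0
        hist_a, hist_b = [], []  # each side's record of the *other's* moves
        for _ in range(rounds):
            move_a = strategy_a(hist_a)
            move_b = strategy_b(hist_b)
            pa, pb = PAYOFF[(move_a, move_b)]
            score_a += pa
            score_b += pb
            hist_a.append(move_b)
            hist_b.append(move_a)
        return score_a, score_b

    # Mutual tit-for-tat sustains cooperation (60 vs 60 over 20 rounds);
    # against always_defect it loses only the first round, then defects.
    print(play(tit_for_tat, tit_for_tat))
    print(play(tit_for_tat, always_defect))

(The point of the iterated variant, as opposed to one-shot: repeated interaction lets conditional cooperation outscore unconditional defection, which is what makes "cooperate by sharing book space" a stable move for a meme.)
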
22:57 < fenn> the AIs can choose to cooperate or defect
22:59 < fenn> the fact that they can produce a combined paperclip ducky factory and share factory optimization code doesn't mean that the meme is generally applicable
23:00 < fenn> in fact it's probably a poor replicator because it only applies to the two AIs in particular
23:00 < muurkha> which one, bloodlust?
23:00 < fenn> the combined paperclip ducky factory meme
23:00 < fenn> or whatever sort of trade they negotiate
23:00 < jrayhawk> re: transhumanism: essentially all new traits expanding the scope of resource control are sourced from general induction ("the brain") rather than mutation/splicing/CNV and survivorship (evolution), and now the expansion of general induction is functionally meta to gene proliferation.
23:00 < fenn> the bloodlust is even worse at replicating, it doesn't even infect the ducky maximizer
23:01 < jrayhawk> while this does extend into AI, it also predates AI by quite a while
23:02 < fenn> isn't that just "humanism"
23:02 < fenn> er... civilization?
23:02 < jrayhawk> that remains to be seen
23:04 < jrayhawk> I think the two (the brain and evolution) have been functionally divorced for a while
23:05 < jrayhawk> transhumanism is establishing a hierarchy again
23:05 < fenn> and yet rome fell
23:05 < jrayhawk> just the opposite of the hierarchy we had before
23:05 < fenn> please describe this hierarchy in more concrete terms
23:06 < fenn> i get very suspicious when people start abstracting away hardware and pretending that it doesn't exist
23:07 < fenn> if the hardware fails, all your precious memes go into the compost heap
23:07 < fenn> "the machine stops"
23:12 < fenn> we're all just replicators: AI, humans, memes
23:14 < fenn> it's a big rock paper scissors game where the rules are changing. genetics used to be fixed but now it's malleable and subject to memes. intelligence used to be fixed and now it's malleable and subject to code memes (?)
23:15 < fenn> genetics predisposes people made of that to certain memes...
23:16 < fenn> i don't see any hierarchy, just a big mess
23:18 < fenn> and to complete the loop, there are so many humans that a widespread failure in the technological foundation of the global economy will lead to massive population decrease, so hardware depends on software and vice versa.
23:19 < fenn> if we had a seed factory it might balance things out in the memes' favor. humans are the only bootstrap-capable replicators at the moment
23:28 < jrayhawk> I think as concrete as I can make it: almost all new traits expanding the scope of resource control (some subset of which will be genetic) will be sourced from general induction engines at timescales a tiny fraction of genetic survivorship. This is in contrast to the distant past, when almost all new traits expanding the scope of resource control (some of which were improvements to general induction)
23:28 < jrayhawk> were sourced from survivorship in mostly-randomized mutation/splicing/CNV, and the recent past, where there was still some disjunctive union worth conceptualizing.
23:38 < jrayhawk> if you want to make the claim that evolution is a comically slow general induction engine, i guess i will accept that
23:42 < jrayhawk> but yeah, i think that's the answer: the brain *was* unambiguously mesa, and *is becoming* unambiguously meta.
23:47 < fenn> "a mesa-optimizer emerges unintentionally as a byproduct of another optimization process, while a 23:47 < fenn> meta-optimizer is purposefully designed to optimize other optimizers." 23:47 < fenn> is that what you meant? 23:48 < fenn> the rationalists are seriously discussing genetic enhancement and embryo selection in order to do better AI alignment research... 23:50 < jrayhawk> oh good, so they've moved on from despair, then 23:50 < fenn> no this goes way back 23:52 < fenn> "in order to do" might be presumptive of me. intelligence is generally useful, just as money is generally useful. rationalists beg you to donate to AI alignment research, so it follows they'd beg you to donate your children's brains to AI alignment research 23:52 < fenn> living brains, in case that weren't clear 23:57 < fenn> it's interesting that expressed genes are subject to more mutation than silent genes, and epigenetics is subject to feedback from environmental stresses, so lamarckianism is kinda true. it's just one way that evolution develops new capabilities faster than random chance 23:58 < fenn> lamarckism* (looks bad) 23:58 < jrayhawk> hah, finally, after a decade 23:59 < fenn> did i disagree earlier? 23:59 < jrayhawk> yes --- Log closed Sun May 28 00:00:13 2023