--- Log opened Thu Jun 02 00:00:40 2022
01:28 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has joined #hplusroadmap
01:47 -!- lkcl [~lkcl@92.40.174.171.threembb.co.uk] has quit [Quit: Leaving]
03:01 -!- Molly_Lucy [~Molly_Luc@user/Molly-Lucy/x-8688804] has quit [Read error: Connection reset by peer]
03:06 -!- Molly_Lucy [~Molly_Luc@user/Molly-Lucy/x-8688804] has joined #hplusroadmap
03:15 -!- Molly_Lucy [~Molly_Luc@user/Molly-Lucy/x-8688804] has quit [Quit: Textual IRC Client: www.textualapp.com]
03:34 -!- adlai is now known as adlai|alphanumon
03:34 -!- adlai|alphanumon is now known as adlai|alphanum
03:35 -!- Molly_Lucy [~Molly_Luc@user/Molly-Lucy/x-8688804] has joined #hplusroadmap
03:40 < kanzure> https://fantasticanachronism.com/2021/03/23/two-paths-to-the-future/
03:41 < kanzure> "I believe the best choice is cloning. More specifically, cloning John von Neumann one million times."
03:42 < kanzure> "There are commercial dog- and horse-cloning operations today. We've even cloned primates. The costs, frankly, are completely trivial. A cloned horse costs $85k! Undoubtedly the first JvN would cost much more than that, but since we're making a million of them we can expect economies of scale. I bet we could do it for $200 billion, or less than half what the US has spent on the F35 thus far. ...
03:42 < kanzure> ...Compared to the predicted cost of training superhuman AGI, the cost of one million JvNs is at the extreme lower end of the AI forecasts, about 5 orders of magnitude above GPT-3."
03:43 < kanzure> "Assuming interests are highly heritable we should probably also re-create leading biologists, chemists, engineers, entrepreneurs, and so on. No bioethicists though."
03:59 * nsh strikes the Poe's Law gong
04:01 < nsh> .wik The Alignment Problem
04:01 < saxo> "The Alignment Problem: Machine Learning and Human Values is a 2020 non-fiction book by the American writer Brian Christian. It is based on numerous interviews with experts trying to build artificial intelligence systems, particularly machine learning systems, that are aligned [...]" - https://en.wikipedia.org/wiki/The_Alignment_Problem
04:02 -!- Molly_Lucy [~Molly_Luc@user/Molly-Lucy/x-8688804] has quit [Read error: Connection reset by peer]
04:09 < kanzure> here is a short story about artificial wombs from someone (seems a little boring to me, but i don't think it's trying to be non-boring?) https://docs.google.com/document/d/10qaBFOWYVJPVWMq9rMJ3wFX3hDns7-_XHnlouj1S5so/edit
04:13 < kanzure> https://atis.substack.com/p/the-world-in-2072 "So what caused everyone to grow slightly richer? The answer is the boring one. Things gradually got better over time through new technologies (not one single earth-shattering technology) and innovations, and more people are well-educated enough to be very productive in the world economy. That’s it, the secret sauce is just education and ...
04:13 < kanzure> ...compounding returns over time. Again, many opinion pieces were written."
04:17 -!- Molly_Lucy [~Molly_Luc@user/Molly-Lucy/x-8688804] has joined #hplusroadmap
04:18 < kanzure> https://ftxfuturefund.org/principles/ "Humanity’s future could be vast. Our species could survive for hundreds of millions of years, until the Earth is no longer habitable, or for even longer. Countless people might lead lives of flourishing or misery—or never live at all—depending on what humanity does today. We want to fund projects with clear answers to the question: “How will this ...
04:18 < kanzure> ...project improve humanity’s odds of surviving and flourishing for thousands of years or longer?”"
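The cost arithmetic in the 03:42 quote above can be checked directly. A minimal Python sketch follows; the per-clone and per-horse figures come from the quote itself, while the ~$5M GPT-3 training-cost figure is an assumption here (a commonly cited estimate), so the "orders of magnitude" result should be read accordingly.

    import math

    total_budget = 200e9         # "$200 billion" for the whole program
    num_clones = 1_000_000       # "one million times"
    horse_clone_cost = 85_000    # "$85k" per cloned horse, per the quote
    gpt3_cost = 5e6              # assumed GPT-3 training cost (~$5M estimate)

    per_clone = total_budget / num_clones
    print(f"budget per clone: ${per_clone:,.0f}")                     # $200,000
    print(f"multiple of a horse clone: {per_clone / horse_clone_cost:.1f}x")  # ~2.4x
    # log10(200e9 / 5e6) ~= 4.6, i.e. "about 5 orders of magnitude above GPT-3"
    print(f"orders of magnitude above GPT-3: {math.log10(total_budget / gpt3_cost):.1f}")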
04:28 < nsh> if you boof enough of the longnowthink pipe you eventually go back to living in the moment and the species prospers by default
04:29 < nsh> trying to outsmart the universe that created you is an interesting journey but the destination is the same as the origin
04:29 < nsh> with respect to humanity, extinction is a conceptual negative space
04:30 < nsh> the universe abhors the vacuum and it would take cosmic energies to prevent it humaning
04:30 < nsh> survivalists are just hobbyists whose horse is recreational worrying
04:31 < nsh> which is itself a feature of the aforementioned state of affairs
04:32 < nsh> the proposition most worthy of examination is whether the outcome being in doubt at all is actually essential to the process proceeding
05:40 < muurkha> most species are already extinct; survival is not the default
05:41 < muurkha> nmz787: "cheaper than gravel" sort of implies that you'd have to manufacture it in situ, because anything you hauled in from elsewhere would be more expensive than gravel
05:42 < muurkha> well, as expensive as gravel
05:49 -!- oxphi [~oxphi@107.181.189.44] has joined #hplusroadmap
06:06 -!- yashgaroth [~ffffffff@2601:5c4:c780:6aa0::93] has joined #hplusroadmap
06:11 < nsh> if a runner leaves a slipper behind, is there still a forward motion?
06:12 < nsh> or when a snake sheds its skin is there still a snake?
06:19 < kanzure> really unfortunate that all of this energy is focused on x-risk reduction and altruism instead of building and trade https://theprecipice.com/ https://www.cold-takes.com/most-important-century/
06:21 < muurkha> I think there's already a lot more energy focused on building and trade than on x-risk reduction and altruism. off the cuff I'd guess about 2.5 orders of magnitude more
06:22 < muurkha> for building. 4 orders of magnitude more for trade
06:22 < kanzure> x-risk reduction isn't a productive pursuit because anyone who can successfully reduce x-risk can also successfully increase x-risk
06:23 < muurkha> your conclusion does not follow from its premise; its opposite does
06:23 < kanzure> there needs to be a positive sentiment about the future that is not rooted in altruism
06:23 < muurkha> loosely, at least
06:24 < kanzure> the "(somehow-)inarguably omnipotent" x-risk class is not worth our time arguing
06:24 < muurkha> trade is currently US$85 trillion per year
06:25 < kanzure> i assure you that most of that trade is not focused on million year deep human flourishing
06:25 < muurkha> assuredly true
06:26 < muurkha> but million-year-deep human flourishing is an altruistic aim unless you manage to live a million years yourself
06:26 < muurkha> and even then it's mostly altruistic
06:26 < kanzure> most people do not consider exerting influence over million year timescales to be altruistic
06:26 < kanzure> and hence why there are common law limitations on the lifespan of "indefinite" trusts through inheritance
06:26 < muurkha> most people don't consider anything at all
06:27 < muurkha> altruism doesn't give you a blanket exemption from the common law, so your reasoning is invalid
06:30 < muurkha> the fact is that people exerting influence inevitably come into conflict with other people trying to exert influence, and when those people have been dead for generations, they are in a worse position to defend themselves, so they tend to lose political battles
06:30 * nsh smiles
06:32 < nsh> though oft the ghost a sharp riposte reveals
06:32 -!- lkcl- is now known as lkcl
06:35 < lkcl> saxo: the problem with AI is that the understanding of neurons is fundamentally f****d, and has been for 40+ years.
06:36 < muurkha> lkcl: heh, nice joke
06:36 < lkcl> it's best explained by my friend wai tsang, who has made a life-long study of the brain, with a view to implementing it. https://www.youtube.com/watch?v=Q7ThBK33Svk
06:36 < kanzure> there is some irony in arguing with a bot on the topic of artificial intelligence
06:36 < muurkha> #thatsthejoke
06:36 < kanzure> .title
06:36 < saxo> A new, very consequential & huge scientific idea. Like E=MC2 Life = Intelligence to True AI - YouTube
06:36 < lkcl> muurkha, hah, i forget that saxo's the bot in this channel :)
06:37 < nsh> .t
06:37 < saxo> A new, very consequential & huge scientific idea. Like E=MC2 Life = Intelligence to True AI - YouTube
06:37 < lkcl> because it's not named "bot"
06:37 < nsh> oopss soz
06:37 < lkcl> nsh, kanzure, thx :)
06:37 < kanzure> there needs to be more work on human brain simulation instead of emulation
06:38 < lkcl> https://youtu.be/Q7ThBK33Svk?t=913 for example
06:38 < kanzure> there are only so many high-level pathways of various cortex layers interconnecting, and they are somewhat common between mammalian brain architectures
06:38 < lkcl> kanzure, the problem is that the study of the neurons themselves is not being properly forwarded to people-working-on-simulation/NNs
06:39 < lkcl> for example, it's not even understood in the standard AI community that neurons can perform boolean logic *including XOR*
06:39 -!- mrdata [~mrdata@user/mrdata] has quit [Ping timeout: 258 seconds]
06:39 < kanzure> there's a lot of good neurophysiology out there, sure
06:39 < lkcl> and it's *definitely* not understood that neurons can perform both differentiation and integration with respect to time
06:40 < lkcl> the entire fundamental assumption of AI NNs is: "take the sum of the inputs and do s*** with it"
06:40 < muurkha> really? I thought basic physical considerations would make it obvious that neurons integrate over time
06:40 < kanzure> it's true that computational models of neurons have further to develop, although i'm not sure we need well-modeled neurons to get interesting behaviors out of these systems
06:40 < lkcl> which is about as deeply flawed as it can possibly get
06:41 < lkcl> the other assumption is that the dendrons only extend to a small localised area (the bits that "fire other neurons")
06:41 < muurkha> I don't think anyone working on ANNs believes their "neurons" are even vaguely accurate models of actual neurons
06:41 < lkcl> reality is that the dendrons of most neurons extend vast distances, distributed throughout all other cortexes
06:42 < lkcl> muurkha, and yet, they get billions of dollars of investment to create something that cannot say "no, that's neither an apple nor an orange"
06:42 < lkcl> they attempt to classify it as a "borange"
06:42 < lkcl> (mathematically, i mean)
06:43 < lkcl> also the whole idea of "training" using 100,000 samples is completely f****g ridiculous
06:43 < muurkha> pretty sure image recognition ANNs have been able to distinguish between more than two classes of things for at least 50 years
06:43 < kanzure> "How thalamic relays might orchestrate supervised deep training and symbolic computation in the brain" https://www.biorxiv.org/content/10.1101/304980v1.abstract
06:44 < lkcl> anyone who's studied brain physiology knows that neural structures only require *one* example (one experience) to "learn"
06:44 < lkcl> all other "experiences" become comparative differential *additional* experiences that refine that initial first learning experience.
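For context on the 06:39-06:40 exchange: the standard ANN unit really is "a weighted sum of inputs passed through a nonlinearity", and a single such unit cannot compute XOR (Minsky and Papert's classic result), while a two-layer stack can; reports of XOR-like computation in single biological neurons via dendritic action potentials (e.g. Gidon et al., Science 2020) are what lkcl's claim echoes. A minimal NumPy sketch of both points; the weights are hand-picked for illustration, not learned.

    # A single linear threshold unit: y = step(w . x + b).
    # Brute-force search finds no (w1, w2, b) that fits XOR, while the
    # classic two-layer construction (OR and NAND feeding AND) does.
    import itertools
    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    XOR = np.array([0, 1, 1, 0])

    def unit(x, w, b):
        return int(np.dot(w, x) + b > 0)

    # Search a coarse weight grid; linearly separable functions like
    # AND/OR are found easily, but XOR never is (it isn't separable).
    grid = np.linspace(-2, 2, 9)
    found = any(
        all(unit(x, np.array([w1, w2]), b) == t for x, t in zip(X, XOR))
        for w1, w2, b in itertools.product(grid, repeat=3)
    )
    print("single unit fits XOR:", found)  # False

    # Two layers: hidden units compute OR and NAND, output ANDs them.
    def xor_net(x):
        h_or = unit(x, np.array([1, 1]), -0.5)     # OR
        h_nand = unit(x, np.array([-1, -1]), 1.5)  # NAND
        return unit(np.array([h_or, h_nand]), np.array([1, 1]), -1.5)  # AND

    print("two-layer net:", [xor_net(x) for x in X])  # [0, 1, 1, 0]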
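lkcl's one-example claim at 06:44 maps onto what the machine-learning literature calls one-shot learning, the subject of the NeurIPS 2016 paper muurkha links below at 06:59. The usual scheme classifies a query by similarity to a single stored example per class in some embedding space. A toy Python sketch under that framing; the normalize-only "embedding" and the three-dimensional vectors are illustrative stand-ins, since real systems use a trained network.

    import numpy as np

    def embed(x):
        # Stand-in for a learned embedding network; just L2-normalize.
        return x / np.linalg.norm(x)

    # One stored example ("experience") per class.
    support = {"apple":  np.array([1.0, 0.2, 0.1]),
               "orange": np.array([0.2, 1.0, 0.1])}

    def classify(query, threshold=0.8):
        sims = {label: float(embed(query) @ embed(x))
                for label, x in support.items()}
        best = max(sims, key=sims.get)
        # The similarity threshold is what allows "none of the above"
        # instead of forcing every input into a known class.
        return best if sims[best] >= threshold else "unknown"

    print(classify(np.array([0.9, 0.3, 0.1])))  # apple
    print(classify(np.array([0.1, 0.1, 1.0])))  # unknown (the "mouse")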
06:44 < lsneff> We’re so far away from brain simulation
06:44 < muurkha> they've been getting billions of dollars of investment because they've built systems that work in practice, not because they think they're close to brain simulation
06:45 < lsneff> Neuron models are wayyy too primitive
06:45 < lkcl> muurkha, the problem isn't what the network is trained *to* recognise, the problem comes when you show such a deep-trained network something it *hasn't* been shown before
06:45 < lkcl> train on apple
06:45 < lkcl> train on orange
06:45 < lkcl> no problem
06:45 < lkcl> show it a mouse and it literally goes "that's a borange"
06:45 < lsneff> And there are plenty of things in the brain other than neurons that play an active role
06:45 < muurkha> no, neural networks do not 'literally go "that's a borange"'
06:46 < lkcl> lsneff, indeed. mesolimbic dopamine system, chemicals that affect the supply, etc. etc.
06:46 * nsh notes that science is well past any one given scientist actually comprehending what they're doing at the frontier
06:46 < muurkha> lsneff: what do you think of OpenWorm?
06:46 < lkcl> muurkha, i'm simplifying :)
06:46 < kanzure> or nemaload if not openworm
06:46 < muurkha> lkcl: the problem is that your simplification is wrong, which is indistinguishable from you not knowing what you're talking about :)
06:47 < lkcl> muurkha, ngggh... /me hand-waving.... true... ish :)
06:47 < lkcl> i have resisted going into depth on it because i know i won't come back out of a study of the topic, for a decade
06:48 < lkcl> consequently i know enough to know there's something "wrong" but not enough to explain it fully to someone who *does* know the current research, in-depth, if that makes sense
06:49 < lkcl> hence why i'm going off of what my friend wai has done (paraphrasing, inaccurately) because he *has* spent the past... err... 35 years studying neuroscience
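On the 06:45 "borange" exchange: muurkha is right that no network literally says that, but the underlying point survives in a precise form. A closed-set softmax classifier must spread probability 1 across the classes it knows, so an out-of-distribution input (the mouse) still receives an apple-or-orange label unless a rejection mechanism is added. A minimal sketch with made-up logits:

    # Why a closed-set classifier can't say "none of the above":
    # softmax output always sums to 1 over the known classes, so a
    # mouse shown to an {apple, orange} model still gets a label.
    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())  # subtract max for numerical stability
        return e / e.sum()

    classes = ["apple", "orange"]
    mouse_logits = np.array([0.3, 0.1])  # weak evidence for either class
    p = softmax(mouse_logits)
    print(dict(zip(classes, p.round(3))))  # {'apple': 0.55, 'orange': 0.45}
    # One common mitigation: reject when the max probability or max
    # logit falls below a threshold (out-of-distribution detection).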
06:50 < muurkha> there are accessible overviews of current ANNs and quite a bit of open-source tooling; it would probably be worthwhile for you to spend, if not a decade, at least a few hours studying it
06:51 < muurkha> current ANNs do not attempt to be accurate models of neuroscience but they are the best available solutions for a wide variety of heuristic problems, and the optimization techniques they use are even more broadly applicable than the ANNs themselves
06:52 -!- ^ditto [~limnoria@crap.redlegion.org] has quit [Remote host closed the connection]
06:52 < muurkha> you might be able to apply them to problems that are of interest to you after a few dozen hours of investigation
06:52 < lkcl> muurkha, my interest lies in actually tackling the area of proper and full brain / consciousness implementation
06:53 < lkcl> for that purpose, anything that attempts to only do "heuristics" is of no value, other than as a lesson of what *not* to do
06:54 < lkcl> as in: i don't have any interest, at all, in any of the "standard" uses to which ANNs are applicable
06:54 < kanzure> here is my library:
06:54 < kanzure> https://diyhpl.us/~bryan/papers2/ai/machine-learning/
06:54 < kanzure> i haven't been including recent works tho, so recommendations appreciated
06:55 < lkcl> the one that got me really excited recently was an ASIC by Intel that is capable of learning from a single example/experience
06:56 < lkcl> *that's* the sort of thing i'm interested to find out how they did it
06:56 < lkcl> because it's half-way to proper self-awareness / consciousness
06:58 < muurkha> yeah, one-shot learning is pretty exciting
06:59 < muurkha> https://proceedings.neurips.cc/paper/2016/file/90e1357833654983612fb05e3ec9148c-Paper.pdf is a NeurIPS (NIPS?) paper on the subject from 02016 which Google Scholar tells me has 4294 citations
07:00 < muurkha> I am amused to note that of the five authors four of them have Google email addresses that are based on their (legal?) names, but the fifth is countzero@google.com
07:01 < lkcl> https://www.intel.co.uk/content/www/uk/en/research/neuromorphic-computing.html
07:01 < lkcl> intel neuromorphic asics, codenamed "loihi"
07:01 < muurkha> okay but that's PR, I linked you to actual research
07:02 -!- mrdata [~mrdata@user/mrdata] has joined #hplusroadmap
07:02 < lkcl> i'm a few steps behind! :)
07:03 < muurkha> kanzure's library is well worth perusing
07:04 < muurkha> also that page seems to be about SNNs rather than one-shot learning?
07:07 -!- spaceangel [~spaceange@ip-78-102-216-202.net.upcbroadband.cz] has joined #hplusroadmap
07:18 < lsneff> Loihi is a pretty modular SNN array
07:18 < lsneff> I read a paper about implementing astrocyte analogs in loihi
07:22 -!- Molly_Lucy [~Molly_Luc@user/Molly-Lucy/x-8688804] has quit [Read error: Connection reset by peer]
07:29 < muurkha> neat
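The SNN hardware discussed above, and muurkha's 06:40 remark about neurons integrating over time, are usually described with some variant of the leaky integrate-and-fire model: membrane potential accumulates weighted input, decays ("leaks") between steps, and emits a spike when it crosses a threshold. A minimal discrete-time Python sketch; the constants are arbitrary illustrative values, not parameters of Loihi or any real chip.

    # Minimal leaky integrate-and-fire (LIF) neuron, discrete time.
    # The membrane potential integrates input over time and leaks,
    # i.e. the temporal-integration behavior discussed at 06:39-06:40.

    def lif(inputs, leak=0.9, threshold=1.0):
        v = 0.0
        spikes = []
        for i in inputs:
            v = leak * v + i      # leaky integration of input current
            if v >= threshold:    # fire and reset on threshold crossing
                spikes.append(1)
                v = 0.0
            else:
                spikes.append(0)
        return spikes

    # A weak constant input integrates over several steps before spiking.
    print(lif([0.3] * 10))  # [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]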
08:12 < oxphi> muurkha: which library?
08:17 < muurkha> 13:54 < kanzure> here is my library:
08:17 < muurkha> 13:54 < kanzure> https://diyhpl.us/~bryan/papers2/ai/machine-learning/
08:23 -!- mrdata [~mrdata@user/mrdata] has quit [Ping timeout: 256 seconds]
08:40 < oxphi> thx
09:12 < maaku> lkcl: modern AI is not trying to be bio-accurate
09:12 < maaku> and there's no reason to suppose it ought to be
09:22 -!- Molly_Lucy [~Molly_Luc@user/Molly-Lucy/x-8688804] has joined #hplusroadmap
09:36 -!- HD36079 [~joey@m90-131-37-205.cust.tele2.lt] has joined #hplusroadmap
09:36 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has left #hplusroadmap []
09:50 -!- HD36079 [~joey@m90-131-37-205.cust.tele2.lt] has quit [Quit: Leaving]
09:54 -!- HD36079 [~joeyskmat@2001:470:69fc:105::2:2143] has joined #hplusroadmap
09:56 -!- Molly_Lucy [~Molly_Luc@user/Molly-Lucy/x-8688804] has quit [Read error: Connection reset by peer]
10:30 -!- Molly_Lucy [~Molly_Luc@user/Molly-Lucy/x-8688804] has joined #hplusroadmap
10:44 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has joined #hplusroadmap
11:19 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has quit [Ping timeout: 276 seconds]
11:43 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has joined #hplusroadmap
11:49 -!- mrdata [~mrdata@user/mrdata] has joined #hplusroadmap
12:15 -!- ^ditto [~limnoria@crap.redlegion.org] has joined #hplusroadmap
12:15 -!- ^ditto [~limnoria@crap.redlegion.org] has quit [Client Quit]
12:16 -!- ^ditto [~limnoria@crap.redlegion.org] has joined #hplusroadmap
14:09 < fenn> most x-risks are not inarguably omnipotent: plague, asteroids, nuclear war, climate change
14:33 < kanzure> mutually assured destruction is surely a kind of omnipotence that defeats everyone, right?
14:33 < kanzure> i suppose we never got evidence that countries would actually be effective at implementing true mutually assured irradiation of the planet
14:46 < maaku> Gee, I wish we had one of them doomsday machines.
14:52 < kanzure> how about an anti-doomsday machine instead?
14:57 < maaku> sorry, that was a quote from Dr. Strangelove
14:58 < maaku> I want a Star Wars global anti-ICBM shield
15:03 < L29Ah> i want my aliexpress orders delivered with mass drivers
15:51 -!- spaceangel [~spaceange@ip-78-102-216-202.net.upcbroadband.cz] has quit [Remote host closed the connection]
16:01 < kanzure> backup your data
16:02 < L29Ah> i do it daily, automatically, and you should
16:22 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has left #hplusroadmap []
16:35 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has joined #hplusroadmap
18:07 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has quit [Ping timeout: 258 seconds]
18:16 -!- Llamamoe [~Llamamoe@178235178120.dynamic-4-waw-k-1-2-0.vectranet.pl] has quit [Quit: Leaving.]
18:56 -!- yashgaroth [~ffffffff@2601:5c4:c780:6aa0::93] has quit [Quit: Leaving]
19:39 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has quit [Ping timeout: 255 seconds]
20:03 -!- bulbasaur [~bulbasaur@207.253.236.155] has quit [Remote host closed the connection]
20:06 -!- bulbasaur [~bulbasaur@207.253.236.155] has joined #hplusroadmap
20:15 < maaku> test your backups
20:47 -!- mlaga97 [~quassel@user/mlaga97] has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
20:47 -!- mlaga97 [~quassel@user/mlaga97] has joined #hplusroadmap
--- Log closed Fri Jun 03 00:00:41 2022