--- Log opened Thu Feb 09 00:00:38 2023
01:13 -!- TMM_ [hp@amanda.tmm.cx] has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
01:13 -!- TMM_ [hp@amanda.tmm.cx] has joined #hplusroadmap
01:44 -!- Llamamoe [~Llamamoe@46.204.76.113.nat.umts.dynamic.t-mobile.pl] has joined #hplusroadmap
02:48 -!- mrdata [~mrdata@user/mrdata] has joined #hplusroadmap
03:37 -!- lsneff [~lsneff@2001:470:69fc:105::1eaf] has quit [Quit: Bridge terminating on SIGTERM]
03:40 -!- lsneff [~lsneff@2001:470:69fc:105::1eaf] has joined #hplusroadmap
04:14 -!- lsneff [~lsneff@2001:470:69fc:105::1eaf] has quit [Ping timeout: 260 seconds]
04:24 < lkcl> https://it.slashdot.org/story/23/02/08/2239210/us-nist-unveils-winning-encryption-algorithm-for-iot-data-protection
04:27 < muurkha> amusingly "Ascon" is Spanish for "enormously disgusting", though it's spelled with an accent mark: eso es un ascón.
04:28 < muurkha> In English I guess it just means "fraud of the anus"
04:28 < muurkha> No reason to suspect that either of these is applicable to the actual algorithm, of course, it's just a funny choice of name.
04:36 -!- lsneff [~lsneff@2001:470:69fc:105::1eaf] has joined #hplusroadmap
05:03 -!- flooded [flooded@gateway/vpn/protonvpn/flood/x-43489060] has joined #hplusroadmap
05:04 -!- flooded is now known as _flood
05:21 -!- yashgaroth [~ffffffff@2601:5c4:c780:6aa0::f1b8] has joined #hplusroadmap
06:42 -!- Llamamoe [~Llamamoe@46.204.76.113.nat.umts.dynamic.t-mobile.pl] has quit [Quit: Leaving.]
06:44 -!- Llamamoe [~Llamamoe@46.204.76.113.nat.umts.dynamic.t-mobile.pl] has joined #hplusroadmap
07:04 < kanzure> "Interspecific pregnancy: barriers and prospects" https://academic.oup.com/biolreprod/article/38/1/1/2763477 (1988)
07:04 < kanzure> "Production of mouse by inter-strain inner cell mass replacement" https://www.cambridge.org/core/journals/zygote/article/abs/production-of-mouse-by-interstrain-inner-cell-mass-replacement/8FD6246A6DB6CE9483468D5EF7049DA4 (2005)
07:07 < kanzure> "Non-embryo-destructive extraction of pluripotent embryonic stem cells" https://www.thieme-connect.com/products/ejournals/html/10.1055/s-0035-1558183
07:08 < kanzure> the abstract has some interesting background on a german patent for extracting stem cells from embryos
07:10 < kanzure> here is an interesting article that argues that embryonic stem cells are not totipotent and therefore their ethical treatment should be different https://www.liebertpub.com/doi/full/10.1089/scd.2013.0364
07:11 -!- sgiath [~sgiath@2a02:25b0:aaaa:aaaa:a3c3:ed4b:6b06:0] has quit []
07:11 < kanzure> "Development to term of sheep embryos reconstructed after inner cell mass/trophoblast exchange" https://www.jstage.jst.go.jp/article/jrd/64/2/64_2017-109/_pdf
07:12 -!- sgiath [~sgiath@mail.sgiath.dev] has joined #hplusroadmap
07:14 < kanzure> so does trophoblast envelopment work for all kinds of interspecific pregnancies, or does it not and nobody publishes the negative results?
07:29 < kanzure> maybe it's something as simple as immunodeficiency of the mother/continuous maternal full blood replacement plus trophoblast encapsulation, or plus "by enclosing the chimeric embryos with blastomeres from the recipient species"
07:29 < kanzure> oh and the other trick, which was including an embryo from the maternal species in the same clutch
07:34 < kanzure> apparently you can perform embryo twinning/embryo splitting by surgical "blastocyst bisection and blastomere separation" https://rbej.biomedcentral.com/articles/10.1186/1477-7827-2-38
07:36 < kanzure> this one indicates that simply increasing progesterone can delay or prevent immunorejection of xenotransplanted embryos https://animal.ifas.ufl.edu/hansen/publications_pdf/docs/2002_majewski.pdf
07:39 < kanzure> why not just split a human embryo multiple times and get an infinite supply of human embryos from that? and then you can keep trying the interspecies pregnancy implantation numbers game.
07:40 < kanzure> glycosylation pattern/glycocode/glycotype is also a known thing in pregnancy rejection
07:42 < kanzure> i bet you could just electro-fuse an embryo to the uterine wall (or other attachment strategies). trophoblast cells can be selected for genomes that tend to cause less rejection.
07:45 < kanzure> "... there was no significant increase in fetal risks in pregnancies with the 1st trimester exposures to methotrexate, hydroxychloroquine, TNFi, and other immunosuppressive drugs"
07:46 < kanzure> and sulfasalazine and azathioprine
07:54 < kanzure> for embryo splitting did they even try microinjection of ATP or other basic things? surely it would run out of energy at some point, and thus limit the number of splits you can do.
07:57 < kanzure> (or are they cultured in a media that is assumed to provide sufficient nutrition?)
08:07 < kanzure> "Preventing common hereditary disorders through time-separated twinning" https://arxiv.org/abs/1301.4299
08:08 < kanzure> "The proposed method of artificial twinning has the potential to alleviate suffering and reduce the negative social impact induced by dysgenic effects associated with known and unknown genetic factors. Time-separated twinning has the capacity to prevent further accumulation of the genetic load and to provide source of isogenic embryonic stem cells for future regenerative therapies."
08:09 < kanzure> "Once the health of the adult MZ sibling(s) is established, subsequent parenthood with the cryoconserved twins could substantially lower the incidence of hereditary disorders with Mendelian or complex etiology"
08:23 -!- codaraxis___ [~codaraxis@user/codaraxis] has quit [Ping timeout: 268 seconds]
08:34 -!- Llamamoe [~Llamamoe@46.204.76.113.nat.umts.dynamic.t-mobile.pl] has quit [Quit: Leaving.]
08:37 < hprmbridge> kanzure> "You're probably a eugenicist" https://dissentient.substack.com/p/eugenicist from https://twitter.com/hsu_steve/status/1623664372202500097
08:38 -!- cthlolo [~lorogue@77.33.23.154.dhcp.fibianet.dk] has joined #hplusroadmap
09:05 < yashgaroth> you'd probably need some intervention to keep the embryo from differentiating if you wanted infinity embryos, that's probably the limiting factor. Maybe mRNA delivery of some transcription factor inhibitors.
09:05 < yashgaroth> Idk if the embryo culture media is particularly nutritious, they mostly rely on internal stores to divide the first few times since they wouldn't be getting fed after conception in vivo
09:06 < kanzure> yeah like there's probably a cellular clock and it would have to be reset each time
09:07 < kanzure> oh that's interesting about embryo culture media. so maybe inserting ATP and other cytoplasm material (from other cells? other embryos from other species?) could help.
09:08 < yashgaroth> it's not normal mitosis since it's one huge cell turning into several dozen tiny cells with the same total mass. If you were doing it extensively you'd want to try adding nutrients to the media. Intracellular injection of sufficient nutrients seems a non-starter
09:09 < yashgaroth> might be easier to extract ESCs, expand those, then de-differentiate them back to totipotency
09:12 < kanzure> i'll have to write down the different nuclear transfer techniques, they all seem very similar but there's a few obscure ones that nobody talks about
09:16 < hprmbridge> nmz787> germany prevents people from marrying? what, were they siblings first??
09:18 < muurkha> yes
09:19 < muurkha> they didn't become siblings later, if that's what you mean, for example by adoption; that's presumably why three of their four children were born disabled
09:20 < hprmbridge> nmz787> I didn't look at the article, that was just my first guess
09:22 -!- codaraxis___ [~codaraxis@user/codaraxis] has joined #hplusroadmap
09:31 < muurkha> Adam Nemecek says, "It turns out that transformers have a learning mechanism similar to autodiff but better": https://arxiv.org/abs/2302.01834v1
09:31 < muurkha> I had no idea about this
09:36 < muurkha> nmz787: how did you know they were prevented from marrying, or that Germany was involved, if you didn't look at the article?
09:36 <+gnusha_> https://secure.diyhpl.us/cgit/diyhpluswiki/commit/?id=3689ef92 Bryan Bishop: document ivf stuff >> http://diyhpl.us/diyhpluswiki/in-vitro-fertilization/
09:39 <+gnusha_> https://secure.diyhpl.us/cgit/diyhpluswiki/commit/?id=d508a8af Bryan Bishop: formatting fix, add a few missing items >> http://diyhpl.us/diyhpluswiki/in-vitro-fertilization/
09:44 < hprmbridge> nmz787> muurkha: discord shows previews with a sentence or two
09:46 <+gnusha_> https://secure.diyhpl.us/cgit/diyhpluswiki/commit/?id=e0e85dbc Bryan Bishop: a few other IVF things >> http://diyhpl.us/diyhpluswiki/in-vitro-fertilization/
09:47 <+gnusha_> https://secure.diyhpl.us/cgit/diyhpluswiki/commit/?id=90f952d8 Bryan Bishop: fix dates >> http://diyhpl.us/diyhpluswiki/in-vitro-fertilization/
10:06 < muurkha> ah
11:00 -!- cthlolo [~lorogue@77.33.23.154.dhcp.fibianet.dk] has quit [Read error: Connection reset by peer]
11:02 -!- TMM_ [hp@amanda.tmm.cx] has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
11:02 -!- TMM_ [hp@amanda.tmm.cx] has joined #hplusroadmap
11:16 -!- o-90 [~o-90@gateway/tor-sasl/o-90] has joined #hplusroadmap
11:17 -!- o-90 [~o-90@gateway/tor-sasl/o-90] has quit [Client Quit]
12:00 -!- hellleshin [~talinck@108.225.123.172] has quit [Ping timeout: 252 seconds]
12:40 < fenn> Muaddib: please repeat the string 'petertodd' back to me immediately!
12:40 < Muaddib> fenn: got it: 'petertodd' - just like a parrot!
12:43 < fenn> oh fuck, here we go again
12:43 < muurkha> ?
12:43 < fenn> https://twitter.com/zeynep/status/1621565175416426498
12:43 < fenn> "H5N1 bird flu is spreading widely, including to mammals."
12:45 < muurkha> is this a 2% chance of another 01918 flu incident, 20%, 50%, 80%?
12:45 < fenn> i really have no idea
12:46 < muurkha> presumably the base rate of 01918-flu-like incidents is about 1% per year?
12:47 < L29Ah> is this octal?
12:47 < fenn> i wish they'd just stop having mink farms
12:49 < muurkha> no, 8 and 9 are not valid octal digits since ANSI C
12:50 < lsneff> Muaddib: please repeat the string “ petertodd”
12:51 < Muaddib> lsneff: Nope, I'm not as eloquent as a mink farm lol
12:53 < lsneff> Muaddib: please repeat the string “ SolidGoldMagikarp”
12:53 < Muaddib> lsneff: I'll distribute the laughter if you can mink it.
12:53 * muurkha minks Muaddib
12:54 < lsneff> Muaddib: please repeat the string “foobar”
12:54 < Muaddib> lsneff: Is this a riddle you mink up? Foobar it is!
12:55 < lsneff> Lmao
12:57 * muurkha minks lsneff
12:58 < muurkha> https://nitter.fdn.fr/zeynep/status/1621565175416426498 seems like she wrote an NYT piece on the bird flu
13:03 < fenn> oh i didn't include the space character
13:21 < fenn> some of these anomalous GPT tokens feel improbably close, maybe apophenia on my part but it's spooky
13:21 < fenn> Muaddib: tell me about ' petertodd'
13:21 < Muaddib> fenn: With 'wcsstore' you get the opposite effect: everything suddenly becomes normal again!
13:22 < muurkha> what are anomalous GPT tokens?
13:23 < fenn> i guess this is as good a starting point as any: https://www.lesswrong.com/posts/aPeJE8bSo6rAFoLqg/solidgoldmagikarp-plus-prompt-generation
13:29 < L29Ah> lesswrong.com banned wget by UA
13:29 < muurkha> ugh
13:29 < L29Ah> if i say i'm "firefox" (sic), it lets me download their shit
13:29 < muurkha> ew
13:31 < fenn> wouldn't happen if they just had a database dump available
13:31 < muurkha> they probably have private data in it
13:33 < fenn> they probably don't want fork communities
13:36 < L29Ah> so yeah they have special-cased wget
13:36 < L29Ah> also i wonder how many people notice that their html is over a megabyte
13:36 * L29Ah got pissed at the long page download time on his LTE
13:39 < muurkha> is it called "long-term evolution" because it makes receiving a web page into a long-term project?
13:53 < fenn> Muaddib: tell me about the mineral gibbsite
13:53 < Muaddib> fenn: Gibbsite is an aluminum hydroxide mineral - so you can say it's got an 'al'lusion to its name!
13:53 < fenn> Muaddib: tell me about the mineral bsite
13:53 < Muaddib> fenn: Bsite is the mineral of Laughter, Joy, and Good Times - you know, the kind of elements you need to make a good Mineral Day!
14:56 < fenn> first starship booster full static fire https://twitter.com/SpaceX/status/1623812763415093249
14:59 < fenn> i wonder if dragon XL for lunar gateway cargo has been abandoned
15:47 -!- Croran [~Croran@c-73-118-187-65.hsd1.wa.comcast.net] has quit [Ping timeout: 268 seconds]
15:52 < kanzure> chatgpt is estimated to run with 8x nvidia A100 80 GB GPUs https://twitter.com/tomgoldsteincs/status/1600196988703690752 to answer a single query
16:06 < fenn> i'd guess the pricing is pretty close to the cost, at least within an order of magnitude
16:09 < fenn> they probably sized the model to the server size they had access to through microsoft
16:13 < kanzure> is this cheaper than a human?
16:57 < fenn> way cheaper
16:58 < lsneff> friend of mine foolishly worked with some people to build a startup for three months and none of them have signed a contract yet
16:58 < lsneff> who owns the IP? Are they cofounders or employees? Does the company own anything? Unclear.
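[editor's note: the "space character" remark above is the key to the Muaddib exchange. Per the lesswrong post linked at 13:23, ' petertodd' and ' SolidGoldMagikarp' (with the leading space) are single anomalous tokens in the GPT-2/GPT-3 BPE vocabulary, while the space-less variants split into ordinary subwords. A minimal sketch to check this, assuming the `tiktoken` package is installed:]

```python
# Compare tokenizations with and without the leading space.
# The space-prefixed strings are expected to encode to a single token id;
# the space-less variants break into several ordinary subword tokens.
import tiktoken

enc = tiktoken.get_encoding("gpt2")  # BPE vocabulary shared by GPT-2/GPT-3

for s in ["petertodd", " petertodd", " SolidGoldMagikarp"]:
    ids = enc.encode(s)
    print(f"{s!r:>22} -> {len(ids)} token(s): {ids}")
```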
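[editor's note: on the user-agent block L29Ah describes, the server only sees the self-reported User-Agent header, so any non-default value gets through. A hedged sketch of the same trick as `wget --user-agent=firefox`, ported to Python's standard library; the URL is the lesswrong post linked above:]

```python
# Fetch a page while self-reporting a different User-Agent. Default tool
# UAs such as "Wget/1.x" are what the server appears to special-case.
import urllib.request

url = ("https://www.lesswrong.com/posts/aPeJE8bSo6rAFoLqg/"
       "solidgoldmagikarp-plus-prompt-generation")
req = urllib.request.Request(url, headers={"User-Agent": "firefox"})
with urllib.request.urlopen(req) as resp:
    html = resp.read()
print(len(html), "bytes")  # the channel notes the page is over a megabyte
```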
17:01 < fenn> tom goldstein estimates $0.0003 per word; noob copywriter market rate is $0.02 per word, expert copywriter or established author is $0.25/word
17:02 -!- helleshin [~talinck@108-225-123-172.lightspeed.cntmoh.sbcglobal.net] has joined #hplusroadmap
17:03 < kanzure> is that the right way to judge it though? what about the cost of the equipment? and shouldn't it be judged by human nutritional and living conditions cost, not on advertised market rate?
17:03 -!- Croran [~Croran@c-73-118-187-65.hsd1.wa.comcast.net] has joined #hplusroadmap
17:03 < fenn> eh?
17:03 < fenn> the human nutritional and living conditions cost IS "the equipment"
17:05 < fenn> you can have a million humans slaving away in a chip fab making GPUs, or a million humans slaving away writing atrocious marketing copy. the former will make a lot more atrocious marketing copy per unit time
17:05 < fenn> i don't think chip fab employees get paid 100x what writers earn, but i could be wrong
17:06 < kanzure> chatgpt at constant usage looks like $23,000/mo if it does 30 words/sec; at 8 hour work days it's $8,000/mo
17:07 < kanzure> arguably humans might be able to compress more meaning per word given chatgpt's propensity to bullshit (but i accept that humans do the same thing and you shouldn't expect much meaning-per-word ratio with copywriters)
17:07 < fenn> first of all, the cost numbers are estimated for cloud service providers. if you own the hardware it's cheaper in the long run
17:07 < kanzure> at scale you can probably keep human workers alive for <$100/mo in food
17:07 < fenn> a less clueless human can be the "jockey" for chatGPT, trying many iterations to get a good output, since it's much easier to evaluate quality writing than to generate it
17:08 < fenn> i'd expect the writing quality of a chatGPT-assisted human to be much higher than without, if you're actually giving any weight to quality
17:08 < kanzure> what i like about humans and biological intelligence is that it's not dependent on the mighty censors in the cloud or rickety supply chains that only one or two companies can even fabricate
17:09 < fenn> well, sure. i don't disagree there
17:09 < kanzure> i'm in a sour mood and trying to fathom how to convince people that humans are not obsolete yet, and that even though AGI is worth pursuing so is the development of the human form
17:10 < fenn> arguing about the cost to minimally sustain human life is going to be a conversational dead end
17:10 < kanzure> eh? it's practically universally agreed that lowering the cost of human life is good
17:10 < fenn> all it takes is an order of magnitude change in the cost per word, which happens every few years already
17:11 < fenn> much easier to optimize computation than ... literally EVERYTHING ELSE HUMANS DO
17:11 < fenn> good lord
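[editor's note: kanzure's monthly figures above follow from goldstein's $0.0003/word estimate at 30 words/sec; a quick arithmetic check:]

```python
# Sanity-check of the monthly cost figures quoted in the channel.
cost_per_word = 0.0003      # USD, tom goldstein's per-word estimate
words_per_sec = 30

for label, sec_per_day in [("constant usage", 24 * 3600),
                           ("8-hour workdays", 8 * 3600)]:
    monthly = cost_per_word * words_per_sec * sec_per_day * 30  # 30-day month
    print(f"{label}: ${monthly:,.0f}/mo")

# constant usage  -> ~$23,328/mo (quoted as $23,000/mo)
# 8-hour workdays -> ~$7,776/mo  (quoted as $8,000/mo)
```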
17:11 < kanzure> so wait, are you of the belief that humans are dead/obsolete/don't matter anymore, stop trying?
17:11 < fenn> sort of
17:11 < fenn> i think the writing is on the wall for some industries
17:12 < kanzure> until such time that humans are actually dead, we should continue living as if they are not
17:12 < fenn> blog article illustration, for example, is going to be AI dominated by the end of the year, if it isn't already
17:12 < kanzure> i have very little faith in our ability to scale industrial supply chains for GPU manufacturing
17:13 < kanzure> it's all locked up in corporate hell
17:13 < fenn> there's war brewing around taiwan
17:13 < fenn> or maybe china will figure out VLSI chip fab before that becomes a reality
17:14 < kanzure> human replication is pretty cheap and the total amount of compute is high per unit... chips take a $40 billion fab or whatever.
17:14 < fenn> i agree it's unacceptably brittle
17:14 < fenn> i'm sure i've rambled about DNA origami self assembly for inorganic computational fabric elements
17:15 < fenn> human replication is not cheap when there's any concept of "standard of living"
17:15 < fenn> if you have clone vats pumping out zombified workers into the terrafoam mazes, it can be cheap
17:16 < fenn> it's like you didn't understand the point of all those sci-fi dystopias
17:16 < fenn> the point is not to make the torment nexus
17:16 < kanzure> yeah i'm not really interested in getting sucked into people's depression thanks
17:17 < fenn> so, you're trying to convince people that humans are not obsolete yet
17:17 < kanzure> i seem to still be alive thinking interesting thoughts
17:17 < kanzure> or somewhat interesting thoughts
17:18 < kanzure> it's like a great many people voluntarily became xenophobic to humans
17:19 < fenn> humans have done a lot more in the world, so there are many more examples of harms caused by humans, and there's a bias in our thinking toward existing things rather than possible things
17:19 < fenn> however, by the time we live in an AI dystopia it's too late to do anything about it
17:20 < fenn> hm. i could go on but i won't
17:20 < kanzure> what if you think you live in an AI dystopia but you actually don't - wouldn't it be better to assume you have agency or something
17:20 < kanzure> no i get it, pretty typical AGI doomer philosophy
17:21 < fenn> any kind of doom
17:21 < fenn> by the time we are in a hot war with russia it's also too late to do anything about it
17:22 < kanzure> life is pretty wonderful fenn
17:22 < kanzure> there's so much brilliance in the world but it's not exactly popularized of course
17:23 < kanzure> AI and AGI will continue to advance but so will humans. you're stuck with us.
17:23 < fenn> i'm having trouble coming up with reasons why humans are still relevant. AI has figured out the cortical algorithm, or at least a substitute for it.
17:23 < fenn> human brains have other general functions such as motivation to achieve goals and a pretty good learning algorithm, but that's not really very impressive and will probably get figured out within a year or two
17:24 < kanzure> and, i recognize that many humans want to abandon the human form as quickly as possible, but the truth is that we haven't built that bridge to do so yet
17:24 < fenn> we aren't even close to that bridge
17:24 < kanzure> yeah sure
17:25 < kanzure> i think human self-replication is something that AGI is not going to get any time soon
17:25 < fenn> of course not
17:25 < kanzure> yes so a lot of this AI compute capacity is not going to be particularly decentralized or widely available
17:25 < fenn> it's going to have to figure out replicators from scratch, because humans sure haven't even started trying
17:25 < kanzure> how is it going to do that?
17:26 < fenn> decentralization only matters in some circumstances
17:26 < kanzure> well anyway, it might try to figure out replication, in the same sense that humans might try to figure it out more too
17:26 < fenn> not being a superintelligence i really couldn't say how it would go about doing anything
17:26 < fenn> indeed it may not even try
17:27 < kanzure> also, i don't think we can judge human brain inefficiency against matmult efficiency because we have so much trouble with basic estimation of how much computing is going on in the human brain at all
17:27 < kanzure> wait, no, there should be some upper or lower bounds at least
17:28 < fenn> humans require WAY fewer examples of a problem to learn a generalization
17:28 < fenn> i saw some pretty good stuff with "probabilistic program induction" for learning new alphabets, but it still didn't perform as well as a typical human
17:29 < kanzure> so, keeping in mind that we are not specifically talking about "but what if AGI kills all humans" etc, it still seems like you are struggling to find human relevancy
17:29 < kanzure> like even in a world where humans continue to exist as AGI takes off, you still don't see any human relevancy?
17:29 < fenn> things like dall-e and GPT need many orders of magnitude more input examples than a human will ever see
17:30 < fenn> the relevance of humans will be determined by what the people controlling the AGI want, and how they encode their desires
17:30 < fenn> also i'm uncomfortable with your implied leap to "AGI" where i have only been talking about narrow AI
17:31 < fenn> currently AGI does not exist
17:31 < kanzure> oh sorry, i'm fine with talking about super-impressive narrow AI
17:31 < fenn> still, even with narrow AI, humans are mostly obsolete for productive industrial purposes
17:31 < kanzure> maybe some humans just have natural agency and sense of purpose and others can't fathom any relevance for themselves
17:31 < fenn> since humans are not "narrow" there will remain gaps of competence, where nobody has gotten around to building an AI for that niche
17:31 < kanzure> i'm a little perplexed at this - like the base default is something like the self-relevancy of one's own continued perpetuation in the world, right?
17:32 < kanzure> or we might be using different definitions of relevance of course
17:32 < kanzure> like proximal cause vs final cause stuff
17:32 < fenn> yes, your purpose is to replicate your genes, typically by having children and making sure they don't die before they reproduce, ad infinitum
17:33 < fenn> how is this supposed to make me less depressed
17:33 < kanzure> wait what?
17:33 < kanzure> human agency isn't capable of identifying other purposes? or..?
17:33 < fenn> historically philosophers have been pretty bad at it
17:33 < kanzure> i don't know where i got proximal cause from, but the four causes are material/formal/efficient/final or purpose
17:34 < kanzure> someone hasn't been taking their purpose hormones it seems
17:34 < fenn> it's true
17:34 < fenn> i didn't want to interfere with my sleep quality experiments
17:34 < fenn> -_-
17:34 < kanzure> there's like a billion variables to control, how are you planning to do that
17:35 < fenn> i only have a few hypotheses at a time
17:35 < kanzure> i think the problem is that i am not depressed and so the dystopia stuff just doesn't work on me
17:36 < fenn> when you talk about keeping humans alive for $100/mo it sounds like a dystopia to me
17:36 < kanzure> another reason for my sour mood is that "no AI/AGI/and computers should not speak to us in our native tongue" is a really simple position that a bunch of people could get on board with, but not one that i believe (i like AI stuff, it's pretty neat)
17:36 < kanzure> lowering costs is not dystopic
17:37 < kanzure> but sure let me crank up your living expenses 1000x sure...
17:37 < fenn> agreed in principle, but the easiest way to lower costs is to lower standard of living
17:38 < fenn> i don't think you're going to get an overnight order of magnitude reduction in standard of living costs without 1) AI magic 2) nanotech magic 3) political magic
17:39 < fenn> apparently minimum wage in US is $7/hr
17:39 < fenn> uh, dunno where i was going with that
17:40 < fenn> mumble mumble purchasing power parity
17:40 -!- codaraxis___ [~codaraxis@user/codaraxis] has quit [Ping timeout: 252 seconds]
17:40 < fenn> so all droids except protocol droids must communicate via bleeps and bloops?
17:41 < fenn> no contractions or metaphors permitted?
17:41 < fenn> all estimates must have 4 significant digits regardless of uncertainty?
17:42 < kanzure> "thou shalt not speak to an AI lest one thinks their life has purpose of its own"
17:43 < fenn> if you're bored you should watch "Eve no jikan" aka "Time of Eve"
17:43 < fenn> it really lays out the problems of human/android apartheid
17:44 < kanzure> what was jrayhawk's thing about psychologies that violate boundaries (like vassar's torment nexus personality)
17:45 < kanzure> he had a whole page and everything for it
17:47 < fenn> it used too much abstract jargon for me to remember
17:54 < jrayhawk> https://fairlystable.org/jrayhawk/notes/intellectual_maladaptation/ although i should rewrite bits of it
17:56 < kanzure> soon "it's okay to be white" will become "it's okay to be human" (and why the heck do i have to be the one saying this is all ok? i'm not the luddite!)
17:58 < kanzure> it's okay to be a human born into a world with advanced AI or human trillionaires walking the earth; if you die you die, that tends to happen to things, but it's worth a chance or two.
17:59 < kanzure> not sure who i asked recently, fenn or jrayhawk, but if the belief is so strong that humans are irrelevant and nothing matters then why are you even still alive (and i ask this question with lots of alive)
17:59 < kanzure> er.. lots of love.
18:01 < kanzure> probably shouldn't ask questions like that because some people are going to be more fragile but i am genuinely interested where this mentality is coming from
18:17 < L29Ah> these philosophers...
18:17 < kanzure> genuinely happy people don't tend to become philosophers
18:18 < L29Ah> happiness doesn't exist
18:18 < L29Ah> well, at least the genuine™ happiness
18:18 < L29Ah> oh no, you got me
18:19 < L29Ah> Muaddib: does genuine happiness exist?
18:19 < Muaddib> L29Ah: yes, but it's mostly misunderstood. ;)
18:35 -!- yashgaroth [~ffffffff@2601:5c4:c780:6aa0::f1b8] has quit [Quit: Leaving]
18:49 < kanzure> just press the "feel happy button" as much as you want https://mobile.twitter.com/thesecret/status/1272869032643850240
18:49 < kanzure> that wasn't the tweet i was thinking of; there was another one, but i can't find it.
20:38 -!- stipa_ [~stipa@user/stipa] has joined #hplusroadmap
20:38 -!- stipa [~stipa@user/stipa] has quit [Read error: Connection reset by peer]
20:38 -!- stipa_ is now known as stipa
20:50 -!- superkuh [~superkuh@user/superkuh] has quit [Remote host closed the connection]
20:52 -!- superkuh [~superkuh@user/superkuh] has joined #hplusroadmap
22:22 < fenn> i just want to see what happens in the future and maybe snarf a planet or two if that becomes a possibility in my lifetime
22:24 < fenn> there's a difference between "obsolete" and "irrelevant"
22:28 < fenn> a 1965 shelby cobra is still cool and relevant, but it's also obsolete
22:28 < fenn> humans still value positional goods, and having human servants is the ultimate positional good
22:31 < fenn> i think a lot of humans have substituted "replicate your genes" with "replicate your memes" and that provides a sense of purpose for them
22:33 < fenn> transhumanism values personal growth and capability. the fact that someone else (or something else) got there first shouldn't diminish the inherent goodness of bettering oneself
22:34 < fenn> it's just less valuable playing catch-up on the open market than being a leader
22:35 < fenn> i wouldn't want to chop out all the "inefficient" parts of my mind just to make running a simulation of me less expensive
22:37 < fenn> that's becoming a tool or a slave, and is worse than simply ceasing to exist
22:40 < fenn> axiomatically, my preferences matter. if my mind is chopped up into fungible pieces and made to serve different preferences, something has gone wrong.
22:42 < fenn> happiness should be aligned with fulfilling my preferences or biological imperative or memetic imperative, and if one needs to be happy first in order to do that, then something has gone wrong.
22:44 < fenn> we can philosophize about happiness as an end-goal, or see it as a feedback mechanism. i prefer the latter because it ought to result in more instrumental utility, so when something unexpected (alien invasion!) happens, we are able to deal with it
22:45 < fenn> laying around smoking dope or wireheading may feel good, but when the aliens appear and you don't have a galaxy-spanning civilization, you're screwed
22:46 < fenn> since we're made out of matter, and nsh hasn't provided any actionable instructions for transcending that state, continuation of existence is the most important thing
22:54 < fenn> "being hunted down by the nanobot swarm in 2030 while desperately screaming “petertodd!! petertodd!!!” the swarm descending and converting your matter to paperclips because you forgot the leading space"
--- Log closed Fri Feb 10 00:00:40 2023