--- Log opened Wed Jun 21 00:00:36 2023 00:42 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has joined #hplusroadmap 01:01 -!- helleshin [~talinck@108-225-123-172.lightspeed.cntmoh.sbcglobal.net] has quit [Server closed connection] 01:02 -!- helleshin [~talinck@108-225-123-172.lightspeed.cntmoh.sbcglobal.net] has joined #hplusroadmap 02:47 -!- nefercheprure is now known as TMA 04:32 -!- yashgaroth [~ffffffff@2601:5c4:c780:6aa0:c4bf:8b01:e4e5:75eb] has joined #hplusroadmap 04:58 -!- Llamamoe [~Llamamoe@46.204.72.21] has joined #hplusroadmap 05:27 -!- Malvolio [~Malvolio@idlerpg/player/Malvolio] has joined #hplusroadmap 05:30 < hprmbridge> kanzure> "A kilobyte rewritable atomic memory" https://arxiv.org/pdf/1604.02265.pdf 05:41 < hprmbridge> kanzure> more synthetic embryo drama https://twitter.com/MZG_Lab/status/1671240311098716166 05:53 -!- elinda [~elinda___@191.101.18.123] has joined #hplusroadmap 05:53 -!- elinda [~elinda___@191.101.18.123] has quit [Changing host] 05:53 -!- elinda [~elinda___@user/elinda] has joined #hplusroadmap 05:57 -!- cc0_ [~cc0@2a01:4f9:c010:cf0b::1] has quit [Remote host closed the connection] 05:58 -!- cc0 [~cc0@2a01:4f9:c010:cf0b::1] has joined #hplusroadmap 05:58 -!- cc0 [~cc0@2a01:4f9:c010:cf0b::1] has quit [Remote host closed the connection] 05:58 -!- cc0 [~cc0@2a01:4f9:c010:cf0b::1] has joined #hplusroadmap 06:12 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has left #hplusroadmap [] 06:32 < kanzure> zero-knowledge proofs for block validation in bitcoin https://blog.succinct.xyz/blog/btc-warp 07:37 -!- elinda [~elinda___@user/elinda] has quit [Ping timeout: 255 seconds] 07:40 -!- elinda [~elinda___@191.101.18.123] has joined #hplusroadmap 07:40 -!- elinda [~elinda___@191.101.18.123] has quit [Changing host] 07:40 -!- elinda [~elinda___@user/elinda] has joined #hplusroadmap 07:42 -!- test__ [flooded@gateway/vpn/protonvpn/flood/x-43489060] has joined #hplusroadmap 07:42 < fenn> "those who only lowered a weight saw the same improvements as those who raised and lowered weights -- despite only performing half the number of repetitions. many people may believe the lifting action provides the most benefit, or at least some benefit, but we found concentric muscle contractions contributed little to the training effects." 07:43 < fenn> https://www.sciencedaily.com/releases/2022/11/221103105002.htm 07:44 < fenn> hello why is this new research and not discovered in the 1800s 07:45 -!- flooded [flooded@gateway/vpn/protonvpn/flood/x-43489060] has quit [Ping timeout: 240 seconds] 07:52 -!- elinda [~elinda___@user/elinda] has quit [Ping timeout: 260 seconds] 08:10 < kanzure> were they testing it with some kind of machine (or second person) to do the lift? 08:24 < hprmbridge> nmz787> fenn it seems you'd have to be quite rich to only lower weights.. as in... "servant, now lift the weight for me, so as for me to once more lower it" 08:24 < cc0> > Using a dumbbell, Professor Nosaka recommends using two hands to help with the concentric (lifting weight) phase, before using one arm for the eccentric phase (lowering weight), when performing [...] 08:24 < hprmbridge> nmz787> I guess unless you were pulling down a lighter-than-air balloon 08:26 < fenn> it's not the lowering that's important, it's that the muscle is being extended rather than contracted 08:31 < cc0> i thought muscles only knew how to contract and we had a pair of muscles on each joint to contract in one way and contract in the opposite way (my understanding of biceps vs triceps) 08:31 < cc0> in which case i dont really understand how you can train a muscle when extending it because it would just be "inactive" during this movement 08:39 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has joined #hplusroadmap 08:42 < fenn> you've never struggled to hold something up and couldn't do it and the object falls while you're still holding on? 08:43 < fenn> hardly seems like an "inactive" process 08:43 < fenn> vs just dropping the thing 08:45 -!- test__ is now known as _flood 08:47 < hprmbridge> nmz787> but wouldn't that be some muscle that wraps over the top of your shoulder contracting, to keep the arm held up? 08:48 < hprmbridge> nmz787> maybe the meaning is that even though the muscles are /trying/ to contract... you're not investing enough energy, and they're /slowly/ lengthening 08:48 < kanzure> in conclusion we know nothing 08:49 < kanzure> aren't the bodybuilding people supposed to know these things. what happened to examine.com? 08:49 < kanzure> https://examine.com/search/?q=eccentric 08:49 < hprmbridge> nmz787> I guess we need to invite Arnold into this chat 08:49 < hprmbridge> nmz787> he's done bodybuilding AND lots of sci-fi/hplus-ish movies... he's gotta be an expert 08:50 < hprmbridge> nmz787> I was just referencing when his eyes were popping out on Mars after his space helmet cracked at the end of Total Recall 08:50 < fenn> not realistic btw 08:50 < hprmbridge> nmz787> https://tenor.com/view/total-recall-arnold-gif-18913376 08:51 < hprmbridge> nmz787> hyperbole is useful though, at least for being memorable 08:53 < cc0> fenn: https://media.istockphoto.com/id/1225658587/vector/biceps-and-triceps.jpg?s=170667a&w=0&k=20&c=648NxTTzOxMAkaVANrlz90DR4zMK7cZgZeNz-aBsbsk= 08:53 < fenn> when lowering a barbell, the triceps would not be active 08:54 < cc0> thats how i picture it: youre struggling to hold the object up because the triceps is trying hard to contract 08:54 < cc0> oh 08:54 < cc0> hmm yeah 08:54 < L29Ah> Total Recall is the best comedy action movie and very arnoldish, 10/10 do recommend 08:55 < cc0> oh okay yeah makes sense, it's trying hard to contract, but is being extended because tte object is too heavy 08:55 < cc0> hehe interesting 08:55 < fenn> but you can go to like 80% effort and it's still contracted but lowering the weight 08:55 < fenn> this way you can still be somewhat in control while doing the exercise 08:56 < L29Ah> 07:44 < fenn> hello why is this new research and not discovered in the 1800s 08:56 < L29Ah> it's well known that working against an overwhelming force is the most effective way of strength training 08:57 < L29Ah> but the most dangerous 08:59 < kanzure> has there been no progress in the past 10 years on molecular mechanisms of hypertrophy from weightlifting? 08:59 < kanzure> don't tell me "tears" 09:03 < kanzure> chatgpt says ECM remodeling to accommodate hypertrophy, mechanical load-induced nuclear accretion (myonuclear number go up effect), mechanotransduction, titin-mediated mechanosensing, and inflammation. 09:03 < L29Ah> myostatins? 09:03 * L29Ah doesn't recall anything new 09:05 < kanzure> also points to "low-load blood flow restriction training" as not causing muscle damage, another point against exercise-induced muscle damage theory 09:06 < fenn> yes, it's probably mediated by oxidative stress 09:11 < kanzure> "Downsizing the molecular spring of the giant protein titin reveals that skeletal muscle titin determines passive stiffness and drives longitudinal hypertrophy" https://elifesciences.org/articles/40532 09:11 < kanzure> maybe it's just titin length or the molecular spring properties of titin, both of which can be modulated 09:11 < kanzure> titin is the longest naturally produced protein so it would be interesting to see what would happen if the length was doubled or if the spring region was doubled 09:12 < fenn> 'if someone exercises while restricting blood flow, blood and metabolic byproducts become "stuck in the muscle, unable to leave."' 09:12 < kanzure> "To aid in this work, a mouse model was created in which titin’s stiffness was increased by shortening titin’s molecular spring region. We targeted for deletion 47 PEVK exons (Ttn exon 112–158), referred to as the TtnΔ112-158 model. .... titin might store elastic energy by unfolding Ig domains in passive muscle. Through their refolding during contraction, titin might generate forces ... 09:12 < kanzure> ...that add to the active force (Eckels et al., 2018). An alternative mechanism consists of titin-based effects on thick filament structure that contribute to activating the thick filament (Piazzesi et al., 2018; Fusi et al., 2016). 09:12 < kanzure> " 09:12 < kanzure> "Hypertrophy occurs in TtnΔ112-158 mice, due to longitudinal growth that serially adds sarcomeres. This shifts the in vivo working sarcomere length ranges of TtnΔ112-158 muscles to shorter lengths, supporting that titin-based stiffness is functionally important and that its level is carefully controlled" 09:13 < kanzure> "serial addition of sarcomeres" 09:15 -!- codaraxis [~codaraxis@user/codaraxis] has joined #hplusroadmap 09:24 -!- yorick [~yorick@user/yorick] has quit [Server closed connection] 09:24 < fenn> kaatsu training involves putting these bloodflow restricting bands on your upper arms or thighs, restricting blood flow to the limb, but this only really affects a single muscle group 09:25 -!- yorick [~yorick@user/yorick] has joined #hplusroadmap 09:33 < hprmbridge> nmz787> is there a way to connect chatGPT to the internet for more up-to-date info? 09:33 < hprmbridge> nmz787> I've asked it a few questions that stumped it lately 09:56 < fenn> with chatGPT plus you get access to plugins such as web browsing but then it will have to just click around manually like you would. bing chat has streamlined this somewhat, but it's very restricted 09:57 < fenn> for example bing refuses to write code when told to. it can do it but only if you ask nicely 09:58 < fenn> what you really want is a fine tuned model, but openAI is not doing that, probably for "safety" reasons 10:11 -!- uzkruh [~uzkruh@fl-67-235-196-252.dhcp.embarqhsd.net] has joined #hplusroadmap 10:12 < uzkruh> How many decades until total gender conversion is perfected? 10:13 < kanzure> what is your criteria 10:13 < fenn> make babby 10:13 < fenn> oh he said gender 10:14 < hprmbridge> nmz787> seems like gender conversion is already perfect if you just ignore gender 10:14 < hprmbridge> nmz787> outlaw mirrors and reflective surfaces 10:14 < hprmbridge> nmz787> no pond shall go still 10:14 < L29Ah> uzkruh: not in the next 200 years 10:15 < juri_> I don't know. bioprinting is a thing. 10:15 < L29Ah> Gender in modern discourse is a useless concept of sets of stereotypes loosely associated with sex. As the word lost all meaning, it is only useful for grifting and should be disregarded. 10:15 < uzkruh> Uterine transplants are already possible in Ciswomen. 10:15 < kanzure> we can send fantasy nanomachines into every cell of your body and change the chromosomes, and re-process all proteins and other constructs from a simulation that would have shown what the biological state would have been if there was a different phenotype over the whole life, but this is probably not the criteria you had in mind. 10:16 < hprmbridge> nmz787> kanzure gender!=sex 10:16 < hprmbridge> nmz787> c'mon!!!! 10:16 < fenn> uzkruh: clarify what you mean 10:16 < uzkruh> Criterion of full reproductive capability, then 10:16 < fenn> uzkruh: that's called sex 10:16 < kanzure> does outsourcing it to another person (surrogacy) count? 10:16 < uzkruh> As a start 10:16 < uzkruh> No 10:16 < fenn> yes 10:17 < hprmbridge> nmz787> jesus had a surrogate birth 10:17 < hprmbridge> nmz787> and two fathers 10:17 < kanzure> are you asking for parthenogenesis 10:17 < uzkruh> No, just cultured cell reproductive organs 10:17 < kanzure> okay that is very specific. well, let's see. 10:17 < kanzure> in vitro gametogenesis needs another 2 to 5 years 10:17 < juri_> 30 years, give or take 5. 10:18 < uzkruh> Longer than I guessed... 10:18 < uzkruh> I would have thought no more than 20 10:19 < juri_> never underestimate the capability of puritan screaming to delay technology. 10:19 < fenn> AI apocalypse throws off all estimates 10:20 < fenn> juri_: actually i would expect less objections if the technology were perfected 10:20 < kanzure> the specific requested technology is a male human able to ... carry a fertilized egg, and the egg is from someone else? 10:20 < juri_> fenn: yeah. you'd expect that, wouldn't you. 10:20 < juri_> (humans are horrible) 10:20 < fenn> well, it's not perfected 10:20 < fenn> so there are objections 10:21 * fenn points at literally every transhumanist tech 10:21 < uzkruh> kanzure: Yes? 10:21 < juri_> there will be objections as long as there is a power inbalance in the percieved genders. 10:21 < fenn> it has nothing to do with that 10:22 < juri_> sure it does. do you know how many "betraying the gender" speeches i got, when i came out? 10:22 < uzkruh> Biotech is the long term solution to transgender political disputes 10:22 < hprmbridge> nmz787> the "what is a woman" movie made clear the price tag on conversion is a big motivator to sell as much conversion as possible, regardless of the actual long term consequences 10:22 < juri_> people are horrible, and hold back tech because of their backwards beliefs. 10:22 < uzkruh> 2nd Wave Feminist objections are... confusing and mostly unimportant 10:22 < kanzure> uzkruh: proof required. i see no indication that biological reality matters in that political discourse. 10:23 < kanzure> also, i doubt that we can have a sane conversation about this topic, please keep it to technology requests 10:23 < uzkruh> It will be hard to argue that a biologically converted person who can reproduce is "fake". 10:23 * juri_ nods. 10:23 < kanzure> uzkruh: "fake" what though 10:24 < kanzure> and why would it matter if it is fake? i don't understand. 10:24 < kanzure> i think you are too optimistic about politics. it is a losing strategy. 10:24 < uzkruh> I agree about not getting into metaphysical arguments about gender 10:24 < fenn> this is why i asked you to clarify and maintain a distinction about sex vs gender 10:25 < hprmbridge> nmz787> are women and men who (are born and) can't reproduce fake women and men? 10:25 < hprmbridge> nmz787> they're at least just unable to reproduce without further changes 10:25 < kanzure> is a philosophical zombie a fake human? 10:25 < uzkruh> Huh? 10:25 < kanzure> is a philosopher that argues about philosophical zombies a fake philosopher? 10:25 < kanzure> get on our level :P 10:25 < fenn> yes, next question 10:26 < kanzure> you got me there 10:26 < uzkruh> hprmbridge, what? 10:26 < fenn> hprmbridge is a bot that relays discord chats 10:26 < fenn> the user is nmz787 10:26 < uzkruh> Is there any foreseeable technique to alter chromosomes? 10:27 < hprmbridge> nmz787> yes 10:27 < hprmbridge> nmz787> but you can still alter a chromosome to have a broken phenotype 10:27 < kanzure> modifying chromosomes in adult is substantially harder than germline modification of chromosomes 10:27 < hprmbridge> nmz787> and phenotype has incomplete determinism on a psychological state 10:27 < fenn> you could hypothetically do genetic surgery on a single cell, then grow that cell into an organ such as a gonad, then transplant that gonad into an adult 10:27 < uzkruh> broken phenotype? 10:28 < kanzure> if you alter a chromosome a little bit, you don't necessarily have the phenotype you would have had if you had started with the modifications from the beginning 10:28 < kanzure> although you might! you would have to check! science. 10:28 -!- TMM_ [hp@amanda.tmm.cx] has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.] 10:28 -!- TMM_ [hp@amanda.tmm.cx] has joined #hplusroadmap 10:28 < fenn> it's sort of impossible to know 10:29 < fenn> without god-level computers 10:29 < fenn> uh, you could grow a copy of the chromosome-modified organism to see what happens 10:29 < hprmbridge> nmz787> like you have broken-phenotype CFTR gene (you have cystic fibrosis), you can alter it to fix the broken spot but then alter a different location to re-break it 10:29 < uzkruh> Is AI Foom a real possibility in our lifetimes, though? 10:30 < fenn> yes, next question 10:30 < kanzure> is it a fake possibility? 10:30 < fenn> also yes 10:30 < uzkruh> That's my ask 10:30 < kanzure> sounds like someone hasn't watched enough blade runner and still struggling with the whole real/fake thing 10:30 < hprmbridge> nmz787> quit dreaming of electric synapses 10:30 < fenn> blade runner is confused 10:30 < hprmbridge> nmz787> sheeple 10:31 < juri_> gattica is less confused. 10:31 < kanzure> i'll admit that whatever message blade runner wants to get across they could have done it more concisely 10:31 < uzkruh> How many more viewings? How many more folded unicorns? 10:31 < fenn> in the original story, replicants weren't genetically identical to humans, but that's too easy to detect, so then we add epicycles to beat the objection-raiser into submission 10:31 < uzkruh> How many more cuts of the film? 10:32 < juri_> uzkruh: folded cranes. 10:32 < uzkruh> The villains of Gattaca were his parents 10:32 < kanzure> the villain of gattaca was vincent 10:32 < kanzure> he's just some fraud 10:32 < kanzure> if he wanted to get into space without endangering everyone, he could have just purchased a ticket on a rocket 10:33 < juri_> the villain was society. 10:34 < uzkruh> We could interpret Vincent as having unique, intangible talents, despite not being gene-modded. So less a fraud, more of a missed opportunity. 10:34 < uzkruh> Society didn't wrong Vincent 10:35 < kanzure> he is actively deceptive and harmful to a space program 10:35 < uzkruh> If he has unique talents, he would be an asset 10:35 < juri_> yeah, i'm with kanzure on this one. 10:35 < uzkruh> The film is vague on this 10:36 < fenn> make up your mind, juri_ :) 10:36 < juri_> jerome, jerome, the metronome.. 10:36 < kanzure> not a single error in a million keystrocks 10:37 < fenn> vincent's intangible talent is engaging in risky behavior and "not leaving any reserve for the swim back" - not what you'd want in a pilot 10:38 < fenn> we might extend this argument to say that all humans are bad pilot candidates 10:40 < hprmbridge> nmz787> unless you were a kamikaze 10:42 < fenn> humans are bad kamikaze missile targeting systems 10:43 < kanzure> but better than pigeons right? 10:43 < uzkruh> Out-performed by electronics, I'm afraid 10:43 < kanzure> gattaca appeals to people because it's this narrative about boundary violations- it's a very "punk"/hacker movie. 10:44 < kanzure> great movie, no doubt. but it is responsible for inhibiting biotech progress pretty strongly. 10:44 < fenn> i think you entirely missed the point of gattaca 10:44 < kanzure> and it is widely misused in debates designed to halt biotech progress 10:45 < kanzure> yeah yeah, the human will conquers all, etc 10:45 < fenn> it's inherently about racism 10:45 < hprmbridge> nmz787> idk, I felt inspired by gattaca to go do biotech 10:45 < hprmbridge> nmz787> even in the face of regulation or social discontent 10:46 < hprmbridge> nmz787> boundary violations, hacker/punk, appealed greatly to my teenage self 10:46 < hprmbridge> nmz787> #saveTheTeenagers 10:46 < uzkruh> The "'Gattaca" era in inevitable, as a percentage of people will choose to not modify genes for... reasons. Eventually these people will be washed out of the genepool, hopefully. 10:47 < hprmbridge> nmz787> don't worry, human intellect isn't that superior... we're going to want to keep a pool of normies around "just in case" we screw something up 10:47 < uzkruh> The unmodded may view it as "unfair" that nodded people achieve more. 10:48 < hprmbridge> nmz787> I already find it unfair that unmodded people achieve more than me 10:48 < hprmbridge> nmz787> but that's life 10:48 < hprmbridge> nmz787> at least I survived this long 10:49 < kanzure> "washing" something out of the gene pool sounds like an active hunt, not sure if that's what you intended to say 10:49 < fenn> yeah that's literally ethnic cleansing 10:49 < uzkruh> Why would that happen? 10:50 < kanzure> it is more likely that non-competitive traits will not proliferate as quickly as the more competitive traits 10:50 < kanzure> uzkruh: are you a native english speaker? 10:50 < uzkruh> Yes? 10:50 < kanzure> are you sure? 10:50 < kanzure> i had to learn it 10:50 < uzkruh> Why? 10:51 < kanzure> "why would that happen" refers to what? 10:51 < uzkruh> Violence over gene modding 10:51 < kanzure> what? 10:51 < uzkruh> You said "active hunt" 10:52 < kanzure> no, i asked you 10:52 < fenn> perhaps you should each just restate your position 10:52 < kanzure> 10:46 < uzkruh> The "'Gattaca" era in inevitable, as a percentage of people will choose to not modify genes for... reasons. Eventually these people will be washed out of the genepool, hopefully. 10:52 < kanzure> 10:49 < kanzure> "washing" something out of the gene pool sounds like an active hunt, not sure if that's what you intended to say 10:52 < hprmbridge> nmz787> I guess you could say being out-competed and having a strain die off isn't an active hunt as much as a hostile takeover 10:53 < uzkruh> Why would there be any active hunt? 10:53 < uzkruh> Exactly 10:53 < hprmbridge> nmz787> kanzure hasn't seen idiocracy, have you uzkruh? 10:53 < kanzure> you were the one that said washed... perhaps you're not familiar with ethnic cleansing. 10:54 < fenn> uzkruh seems to be claiming it would happen by natural selection ("social darwinism") whereas from history we know that racially motivated violence and genocide is quite common 10:54 < uzkruh> I've seen it. That future had no genemodding, though. 10:55 < uzkruh> fenn... are you saying we shouldn't modify ourselves? 10:55 < kanzure> what? 10:55 < fenn> not at all 10:55 < fenn> i'm very much in favor of enhancing as many people as possible as soon as possible 10:57 < docl> rather than nanosurgery, you might just keep supplementing stem cells and use WILT to limit existing cells reproduction rate. it's slow though 10:57 < uzkruh> Interesting 10:58 < fenn> uzkruh: should unmodified people get some sort of "affirmative action" bonus so they can remain competitive? 10:59 < uzkruh> That might be necessary 10:59 < docl> that might be necessary to avoid cancer anyway. it's one of aubrey degrey's proposals. it's also how you end up with cells that don't have mitochondrial mutants outcompeting the harmless ones 10:59 < kanzure> docl must be reading the backlog. 11:01 < fenn> uzkruh: would modified people feel resentment toward unqualified unmodified people making decisions for them? 11:01 < uzkruh> So, is AI Foom plausible within 30 years? 11:02 < fenn> it will happen in the next 5 years 11:02 < uzkruh> What makes you think so? 11:03 < fenn> cost of compute trends and the increased pace of innovation in AI 11:03 < fenn> have you used gpt-4 yet? 11:03 < uzkruh> That's speculative reasoning though, isn't it? 11:04 < fenn> i don't have a time machine 11:04 < uzkruh> Gpt-4 is far, far from AGI 11:04 < hprmbridge> nmz787> .g ai foom 11:04 < hprmbridge> nmz787> ugh, I forgot bot actions dont work from discord 11:04 < hprmbridge> kanzure> do you need AGI for foom? why? 11:04 < fenn> i plotted some trends on a graph and squinted at it real good. there will be human-equivalent supercomputers within five years. it's not clear we actually need that much 11:05 < hprmbridge> nmz787> is this what you mean by AI FOOM? https://en.wiktionary.org/wiki/foom 11:05 < uzkruh> Will this improve life for humans? 11:05 < fenn> nobody knows 11:06 -!- uzkruh [~uzkruh@fl-67-235-196-252.dhcp.embarqhsd.net] has quit [Quit: Client closed] 11:06 -!- uzkruh [~uzkruh@fl-67-235-196-252.dhcp.embarqhsd.net] has joined #hplusroadmap 11:06 < hprmbridge> kanzure> nmz787: it's hard vs fast takeoff 11:07 < hprmbridge> kanzure> uh, slow 11:07 < uzkruh> https://davidmanheim.medium.com/a-tentative-typology-of-ai-foom-scenarios-54ff20c906c3 11:08 < uzkruh> Surely we have some idea of what we want from super-AI? 11:09 < fenn> most of the doom prophecies around AI foom involve no humans in the loop 11:09 < uzkruh> We have to plan out our lives for the next decades 11:09 < fenn> no you dont 11:09 < uzkruh> If it happens in 5 years, what should we do? 11:09 < fenn> well, that depends 11:09 < kanzure> what should you do anyway? 11:09 < uzkruh> Not Butlerian Jihad, I hope 11:10 < kanzure> are you an adherent to the AI doom prophecies ? 11:10 < uzkruh> No 11:10 < kanzure> then why ask about butlerian jihad ? 11:11 < uzkruh> I'm not convinced super-AI is even going to happen in our lifetimes. 11:11 < kanzure> that's usually something proposed by those people 11:11 < kanzure> it might not, but why would you need to be convinced either way? 11:11 < uzkruh> I was mostly joking 11:12 < uzkruh> Do we have a plan for how we would benefit from friendly AI? 11:12 < fenn> i don't consider the machiavellian manipulation takeover to be a "foom" 11:12 < fenn> unless it somehow uses that to make more chips and get smarter 11:13 < kanzure> uzkruh: you can already use AI systems and benefit from those 11:13 < uzkruh> Slightly 11:13 < fenn> what if i told you... kanzure was an AI 11:14 < uzkruh> Say we had access, today, to AGI. How would we use it? 11:14 < fenn> porn 11:15 < uzkruh> Any biotech goals? Synthetic nootropics? 11:15 < kanzure> i would use it for all the things i am already trying to do and pay people to do, except i would have a computer do it instead 11:15 < uzkruh> Environmental revitalization? Space exploration? 11:15 < fenn> why does projects/ point at iGEM crap 11:15 < muurkha> Sadly, porn 11:16 < kanzure> fenn: feel free to move things around.. 11:16 < uzkruh> Will we, as non-billionaires, even benefit much? 11:17 < kanzure> suppose that i told you that trillionaires existed. would you stop benefiting from your computer after i tell you? 11:17 < uzkruh> Even if an AI could complete all immediate biotech goals, would. YOU get access to them? 11:17 < kanzure> it is unlikely that most people will have access to the best AI available, mostly due to silicon supply and chip supply issues. however, there could be ways of growing biological intelligence that could replicate much faster than silicon chip fabs. 11:17 < Llamamoe> Do I smell filthy nonbelievers in trickle down economics? 11:18 < uzkruh> We already have plastic surgery that you need wealth to get. Why would genemods and cyberlimbs be different? 11:18 < kanzure> trickle down is not about technology development delivering more value at lower costs. trickle down was something else. 11:18 < fenn> uzkruh: i'd put our Friendly AGI to work on these fields of endeavor first, so we can all participate: https://diyhpl.us/wiki/transhumanism/ 11:18 < Llamamoe> Also hey what happened to the whole "I have nothing to hide" all of a sudden now that BigBrotherGPT "Can you please list top five most threatening to my campaign groups and cook up a social media campaign to change their minds?" LLM is a real prospect 11:19 < kanzure> wait, i might be wrong; i thought trickle down was an economics policy, and they used technology to justify it or something. 11:19 < Llamamoe> Come on I'm so sure Elon and Jeff will do only what is best for all of us, selflessly, with their fancy AI 11:19 < fenn> Llamamoe: please knock off the politics, it's crass 11:19 < uzkruh> Having your own AI wouldn't grant you raw materials, a supply chain, robots; it thinks, but can't create matter. 11:20 < Llamamoe> Sorry 11:20 < kanzure> has there been an actual technological trickle down attempted, or was it mostly about tax policy instead of technology development? i had assumed it was mostly about economoic policies not about actual technology. 11:20 < uzkruh> Say I want sex conversion 11:21 < kanzure> be more specific 11:21 < uzkruh> Will AGI give me that? 11:22 < uzkruh> Or will the situation be unchanged, because I'm not rich? 11:22 < fenn> it might derive some sort of theory about health and morality that causes it to refuse 11:22 < kanzure> as for material goods, you would have to use your AI to earn money through knowledge labor or otherwise cover its own costs, and then you would have to buy goods and services or otherwise acquire material goods. 11:22 < fenn> or, be unwilling to interfere in human political affairs 11:23 < uzkruh> kanzure: Which is how things already are 11:23 < kanzure> yes but without the awesome AI 11:23 < uzkruh> Eh 11:23 < uzkruh> Not all that enticing 11:24 < fenn> humans are already encoding arbitrary political bias into LLMs, it's not that far fetched that the same could be done to an AGI 11:24 < kanzure> we shall all be united through our brain stems and we will all be at peace while all of our wildest wishes are combined and aligned together, as we transform into a blissful sphere encompassing the cosmos, is that what you wanted? 11:24 < kanzure> AI and AGI are just tools, and while one AGI might go wild and take over all matter in the universe, nobody can predict what that behavior would really be 11:25 < uzkruh> How will AGI improve our lives? Won't it just leave us unemployed and destitute while a 0.01% of modified humans form a noble class? 11:25 < fenn> a lot of people are hoping for an end to economic inequality, rather than even more extreme inequality 11:25 < kanzure> uzkruh: the existence of a noble class shouldn't be that relevant to you, surely. why not continue to use AI on your own even if you are not a member of that class? 11:25 < uzkruh> I don't have much use for it 11:26 < fenn> this is all very vague. you should ask about specific scenarios rather than "AGI" 11:26 < uzkruh> For example? 11:26 < fenn> it's like "will civilization be good for us, the uzkruh tribe?" well, it depends... 11:26 < kanzure> life is what you make of it? 11:27 < uzkruh> Civilization And It's Discontents, 2050 Updated Edition 11:28 < fenn> an AGI developed by OpenAI would have very different goals and biases than one developed by Russia 11:29 < uzkruh> Is there a plan, though? 11:29 < fenn> ? 11:29 < uzkruh> To make AI benefits us? 11:29 < uzkruh> *make sure 11:29 < fenn> https://openai.com/blog/planning-for-agi-and-beyond 11:30 < uzkruh> Not just AI Alignment. 11:30 < fenn> is there a difference? 11:30 < fenn> do you know about the law of comparative advantage? 11:31 < uzkruh> Can we trust OpenAI to have our interests in kind? 11:31 < uzkruh> *in mind 11:31 < fenn> disregarding the scenario where AI fucks off and disappears into the cosmos never to be heard from again, an aligned AGI would trade and interact with the world in overall beneficial ways, by definition 11:32 < uzkruh> Even assuming we have a Friendly AI, our lives wouldn't necessarily improve 11:32 < fenn> you could get hit by a bus 11:32 < uzkruh> Yes? 11:33 < fenn> i dunno 11:33 < fenn> shit happens 11:33 < fenn> nobody's perfect, not even a super duper AGI 11:34 < uzkruh> So, if you had access to a super-AI, what would you do with it? 11:34 < kanzure> nobody promised AI or AGI would be some sort of universal beneficence blanket over humanity/all things 11:34 < kanzure> i would probably continue to do what i am already doing, but it would end up being done faster 11:35 < uzkruh> No, they certainly did not 11:35 < uzkruh> That's not a bright future, then 11:35 < kanzure> one of the things about the definition of the singularity is that it's supposed to be this point beyond which you are unable to plan or predict in any meaningful way because things change so rapidly 11:35 < kanzure> it could be very bright, how would you know it's not 11:36 < uzkruh> How do we ensure a good future? 11:36 < kanzure> has that ever been done before? 11:36 < uzkruh> Yes? 11:36 < kanzure> when? who gets to define good? 11:37 < fenn> predicting the present is hard enough 11:37 < kanzure> when has that been done before? 11:37 < uzkruh> Will it be beneficial to YOU? 11:37 < fenn> is that a good future? 11:37 < fenn> it sounds selfish 11:37 < kanzure> no. answer my question. you said that ensuring a good future has been done before. when? 11:38 < uzkruh> This is confusing. If AI won't give us a better future, why would we promote it? 11:39 < kanzure> nobody said it can't give its user a better future 11:39 < uzkruh> Is this an AI Prepper channel? 11:39 < juri_> no? i write haskell. 11:39 < kanzure> maybe? what is AI prepper? 11:40 < kanzure> we have a fairly strong interest in intelligence both biological and artificial. some of us are users of AI tools. others are bystanders. some build AI tools. 11:40 < uzkruh> So we just... wait and see? 11:41 < juri_> again, i write haskell. 11:41 < kanzure> you could wait, but what are you waiting for? what do you want to be doing right now? 11:41 < juri_> writing haskell. 11:41 < uzkruh> No plan, it's inevitable, it will change things for good or bad, we just have to live with it? 11:41 < juri_> :) 11:41 < fenn> uzkruh you seem ot be very uncomfortable with navigating uncertainty 11:42 < fenn> the future was always highly uncertain even without AI 11:42 < uzkruh> Why are you psychologizing this? 11:42 < kanzure> because we are trying to understand your questions and what you are really asking 11:42 < juri_> because i want you to write haskell. 11:42 < juri_> :D 11:42 < kanzure> that too 11:42 < uzkruh> I dont 11:42 < uzkruh> I don't code 11:43 < uzkruh> Not a STEM person 11:43 < juri_> you seem interested in STEM topics. 11:43 * juri_ shrugs. 11:43 < fenn> we generally promote the idea of creating the future you want to happen 11:43 < fenn> it doesn't mean we exist in some alternate reality where nothing outside of our control happens, though 11:44 < uzkruh> How do we do that? 11:44 < juri_> i write haskell. 11:44 < fenn> by working on things you expect to have good outcomes 11:44 < fenn> by not working on things you expect to have bad outcomes 11:44 < juri_> specifically, i'm working on a 6 axis slicing system with high precision. i'm hoping it'll be useful when bioprinting. 11:44 < uzkruh> Which will be completely surpassed by AI in 5-10 years? 11:45 < juri_> have you talked to AI? I mean, i'm impressed, but it will be a long time until it does what i do. 11:45 < fenn> not everyone shares my prediction timeline 11:45 < uzkruh> Ok...? 11:45 < juri_> it's a useful debate partner, but kindof an idiot. 11:46 < fenn> it's justifiable to do manual work on things that will be obsoleted by AI, if you don't expect that to happen soon 11:46 < juri_> in fact, doing the manual work will increase the dataset for the AIs. 11:47 < uzkruh> Will AI benefit anyone without a tech job? 11:47 < fenn> on the other hand, yesterday i saw this guy learning how to draw and trying really hard to justify that to the world, getting angry at the hypothetical twitter viewer judging him for that choice. it seemed misguided 11:47 < juri_> do tech jobs benifit anyone without a tech job? 11:47 < uzkruh> You tell me 11:47 < fenn> yes, society is vastly benefitted by technology 11:48 < uzkruh> This makes AI sound... paltry. Not worth thinking about. 11:48 < juri_> exactly the opposite. 11:49 < juri_> i can't find anyone to keep up with what i do, so having an AI i can argue with about math for hours? good stuff. 11:49 < fenn> uzkruh what are you interested in doing? you're not a STEM person, unsure about the benefits of technology, it's hard to imagine what you might find "good" or of benefit 11:50 < uzkruh> If I don't work in STEM, don't own a company, and can't predict when or how it will manifest... why would I care? 11:50 < fenn> i'm imagining some sort of religious monk dedicated to self-cultivation and character development 11:50 < fenn> is that an accurate guess? (i didn't try very hard) 11:51 < uzkruh> More or less 11:51 < fenn> well, AI will be able to help you with that too, by providing more interaction scenarios for your own personal development 11:51 < juri_> bioprinting will be neat. maybe then, i won't be "fake". 11:51 < fenn> consider why we play video games. games are essentially training in a simulation, so we have better performance in real scenarios 11:52 < uzkruh> Will we be able to afford bioprinting, though? 11:52 < juri_> I do everything with Free Software. 11:52 < fenn> simulation training can be extended beyond the typical "shoot the bad guys" into situations with real moral depth and tests of personal integrity 11:53 < uzkruh> I doubt we need moral moral quandries; our lives are complex enough. 11:53 < fenn> how about a universal basic income? would that simplify your life? 11:54 < uzkruh> Will AI result in that? 11:54 < fenn> like i said, it depends 11:55 < uzkruh> Is there cause to hope? 11:55 < fenn> yes, the current leader in AI, OpenAI, has committed to pursuing UBI 11:55 < juri_> if you can be told to hope by someone on the internet.. go read papers. :) 11:55 < uzkruh> ? 11:56 < fenn> the argument against UBI was always "who will pay for this?" but when a miracle machine is pumping out trillions of dollars in excess economic activity, the answer is easy 11:56 < juri_> the papers all of this is based on are public. 11:58 < uzkruh> If we have UBI but no jobs, we also can't go on strike, and would have no outside income. Couldn't the upper class take advantage of us? 11:58 < juri_> very. they are, now. 11:58 < fenn> it's better than no UBI and no jobs 11:58 < uzkruh> Steady-State Society 11:59 < fenn> is this one of those NWO conspiracy theories? 11:59 < juri_> solid state society was better. 11:59 < fenn> stand alone complex 12:00 < uzkruh> Oppression before AI, oppression after 12:00 < fenn> it's not like you're the only one worrying about this 12:02 < fenn> "Individuals who, with no coordination or knowledge of their peer's actions, act as though they're working together toward a common goal" 12:03 < juri_> fenn: isn't that what we do here? 12:03 < fenn> it could describe an illuminati non-conspiracy or a hacktivist movement just as easily 12:03 < juri_> sure, but when it comes time to cut off heads, count me out. 12:04 < fenn> you don't want your head cut off or you don't want to cut off heads? 12:05 < uzkruh> "I never thought leopards would eat MY face" said tyhe Leopards Eating Faces Party voter 12:05 < juri_> I'm rather attached to the contents of my head. 12:05 < fenn> regarding "AI prepper" question, Wei Dai said that cryptography was basically your only defense against a superintelligence 12:06 < fenn> we can timestamp things to prove they existed, and pile hashrate and economic incentives on top so those attestations can't be forged 12:07 < fenn> unfortunately for unknown reasons the internet archive hasn't been taking this seriously 12:08 < fenn> this channel's logs will be timestamped at the end of the day, so at least this conversation will be proven to have existed 12:08 < fenn> unless cryptography gets broken and nobody fixes it 12:09 < fenn> or at least, the superintelligence will be forced to maintain a web of lies and be constrained by logic, instead of just inventing history out of whole cloth 12:09 < hprmbridge> nmz787> "uzkruh> Do we have a plan for how we would benefit from friendly AI?" do less work on things I don't enjoy or get easily distracted about because I've offloaded much of the work... do things I care more about or am less annoyed by (and which I can't offload now/at the time) 12:10 < uzkruh> Personally, I don't forsee much benefit to me 12:11 < fenn> you are spoiled 12:12 < uzkruh> Um 12:12 < uzkruh> What? 12:13 < fenn> if you really have nothing in your life that would be improved by a radical increase in economic abundance, then you're far better off than most people 12:14 < fenn> also you probably are suffering from excessive pessimism, or postmodernism 12:14 < uzkruh> Why would we have more abundance? 12:14 < fenn> well duh that's the whole point 12:14 < uzkruh> AI could as easily impoverish all but a few 12:15 < fenn> why bother 12:15 < juri_> capitalism impoverishes all but a few. how would that be different? 12:15 < uzkruh> That's the point 12:15 < fenn> it's extra effort to take stuff from people vs just making more stuff 12:16 < fenn> it would be like carefully excavating an ant hill to get the ant eggs 12:16 < juri_> so, i've chosen to do something, rather than nothing, in the face of a historic level of uncertainty. 12:16 < juri_> if i'm wrong, at least i did something about it. 12:17 < fenn> i'm trying to keep up with AI news and get skilled at using today's tech 12:17 < fenn> and generally not dying 12:17 < juri_> good luck. 12:17 < fenn> you too 12:18 < fenn> is chatGPT well informed about f-rep methods? 12:18 < fenn> and, er... algebraic topology? (is that relevant?) 12:19 < juri_> I have found it's useful to know what problems are mind-numbing to solve, and what problems have solutions in special cases. 12:19 < juri_> but you REALLY have to lead it around. 12:19 < juri_> (and that's with GPT4) 12:21 < muurkha> fenn: Brewster isn't really into high-tech solutions or solutions to potential problems. he likes to solve problems when they appear 12:21 < uzkruh> AI Risk: We don't know when it will happen, or what it will do, or how to respond to it, but it will change everything in unforeseeable ways. 12:21 < uzkruh> Pascal makes a new Wager 12:21 < juri_> yep. 12:22 < muurkha> fenn: have you seen those videos where people pour molten aluminum down anthills to get a cool aluminum casting of the anthill shape? 12:22 < fenn> pascal's wager has no downside, but spending all your time worrying about AI does 12:23 < fenn> muurkha: yes, we could all be turned into modern art i suppose 12:23 < uzkruh> Pascal's Wager has in infinite downside? 12:23 < muurkha> hopefully it will be more profitable to employ us 12:24 < uzkruh> I'm NOT worry about AI, since I can't predict, changfe, or plan for it. 12:24 < juri_> s/AI/*/ 12:24 < fenn> "Pascal's Wager: Belief in God is rational, as potential gains outweigh losses if God exists; disbelief risks eternal suffering if wrong." there's no downside to "believing" in god and acting as you would have anyway 12:25 < uzkruh> No religion offers those terms 12:26 < fenn> uh what was my point again 12:26 < fenn> it's good to not be totally clueless. that was my point 12:26 < fenn> it takes time and effort to not be clueless 12:27 < muurkha> and usually it doesn't work anyway 12:27 < uzkruh> I'll take a break from thinking about AI, then. 12:27 < fenn> muurkha: you're too humble 12:28 < muurkha> fenn: rather than "too humble" I would say "too clueless" 12:42 < hprmbridge> nmz787> belief in god doesn't imply belief in a specific religion 12:43 < hprmbridge> nmz787> to me it's like saying I believe in physics, though I don't particularly understand or know or subscribe wholly to any particular model 12:45 < muurkha> how does "I believe in physics" make sense? what's the alternative, Berkelian idealism maybe? solipsism? 12:46 < muurkha> wouldn't Berkelian idealism qualify as a physics too? 12:49 < hprmbridge> nmz787> I'm not sure what they teach at Berkeley 12:49 < hprmbridge> nmz787> pretty sure physics is a subject 12:49 < fenn> the other berkeley, the guy the town is named for 12:50 < hprmbridge> nmz787> idk him 12:50 < hprmbridge> nmz787> physics is a term though 12:51 < hprmbridge> nmz787> the alternative is (in hillybilly accent) "I don't believe in physics" 12:51 < fenn> "Subjective idealism is a form of philosophical monism that holds that only minds and mental contents exist." 12:52 < muurkha> probably this isn't going to be a productive conversation then ;) we're already down in the mud of reasoning from ethnic stereotypes instead of logical propositions 12:54 < fenn> sorry i said the g-word 12:54 < hprmbridge> nmz787> some people probably have never heard of physics 12:54 < hprmbridge> nmz787> so how could they believe? 12:55 < hprmbridge> nmz787> /me goes out to prosyletize physics 12:55 < fenn> even babboons have expectations about how objects will behave 12:55 < fenn> where do i get these misspellings from 12:55 < hprmbridge> nmz787> huh physics isn't on this list https://www.lcps.org/cms/lib4/VA01000195/Centricity/Domain/16649/Ology%20List%20of%20Sciences.pdf 12:56 -!- Malvolio [~Malvolio@idlerpg/player/Malvolio] has quit [Quit: 9EA brb GCTGGG 2023-06-21 19:55:59:756] 12:56 < fenn> anne marie helmenstine, the greatest bootlegger of all time 12:56 < hprmbridge> nmz787> "Loudoun County Public Schools" 12:56 < hprmbridge> nmz787> they've had a bunch of public controversy... so I guess I shouldn't be surprised 12:57 < fenn> no the document is from about.com, a content mill 12:57 < fenn> helmenstine is their science plagiarizer in chief 13:09 < hprmbridge> nmz787> chatGPT is my new stackoverflow 13:14 -!- Llamamoe [~Llamamoe@46.204.72.21] has quit [Quit: Leaving.] 14:00 -!- codaraxis__ [~codaraxis@user/codaraxis] has joined #hplusroadmap 14:03 -!- codaraxis [~codaraxis@user/codaraxis] has quit [Ping timeout: 240 seconds] 14:13 -!- uzkruh [~uzkruh@fl-67-235-196-252.dhcp.embarqhsd.net] has quit [Quit: Client closed] 15:07 < L29Ah> huh uzkruh entertained you for so long 15:14 < hprmbridge> nmz787> was it an AI that you wrote? 15:14 < hprmbridge> nmz787> you sly fox 15:53 -!- uzkruh [~uzkruh@fl-67-235-196-252.dhcp.embarqhsd.net] has joined #hplusroadmap 15:55 < uzkruh> Will Chatgpt-5 be able to manipulate symbols to brainhack humans? 15:56 < hprmbridge> kanzure> what, like propaganda does? 15:56 < uzkruh> The new Sumerian training data... 15:57 < uzkruh> I haven't read Neuromancer, somehow. You may throw rotten fruit at me. 😔 15:57 < hprmbridge> kanzure> this is typically the story of the "silver tongue"-- but people are finding that often what happens with AI is that people spontaneously do the thing anyway and ask the AI to do it, rather than the AI tricking them into doing it. 15:58 < hprmbridge> kanzure> (and that kind of obeyance is good) 16:00 < uzkruh> https://en.m.wikipedia.org/wiki/BLIT_(short_story) 16:00 < fenn> i find myself looking at a lot more stable diffusion output than i really should 16:00 < fenn> it's mostly humans doing the optimizing though 16:01 < fenn> there's a sort of "holy shit i can't believe how good this is" reaction that won't stop 16:01 < uzkruh> If I wanted to make a game in Unity, I could use AI for pixel art and music... 16:02 < uzkruh> But can chatgpt-4 write code? 16:02 < kanzure> yes 16:02 < kanzure> and so can github copilot and github copilot X 16:03 < fenn> and a bunch of other code-specific models 16:03 < uzkruh> If I tell it to code Nethack in Unity, it would be functional-with-debugging? 16:03 < kanzure> .gpt -4 Write "hello world" in python saying hello to uzkruh. Include a fanficul piece of ASCII art like pyramids or stars and special effects. 16:03 < uzkruh> I've wanted to make a game, but my non-coding is a barrier... 16:03 < gptpaste> ​print("Hello uzkruh!")print("╔═══╗")print("║███║")print("║(o) ║")print("╚═══╝")# Adding special effectsimport timefor i in range(5): print("" + "Hello uzkruh!" + "") # Red text time.sleep(0.5) for i in range(3): print("" + "Hello uzkruh!" + "") # Yellow text time.sleep(0.5) for i in - http://sprunge.us/pa2rcT 16:04 < uzkruh> Caves of Qud is like this 16:04 < fenn> huh the color codes render for me, i didn't realize you could send raw terminal color codes over IRC 16:05 < kanzure> .gpt -4 Write the main game loop for a Unity clone of nethack for Uzkruh. Call it Uzkruh's Nethack. 16:05 < gptpaste> ​As an AI language model, I cannot provide a complete code for the main game loop of Uzkruh's Nethack. However, I can give you an overview of what the main game loop should include:1. Initialize the game world and player character.2. Display the game interface and user input options.3. Wait for user input and process it accordingly (e.g., move player character, attack enemy, - http://sprunge.us/iTBVSJ 16:05 < kanzure> sometimes it misbehaves like that 16:05 < uzkruh> Oof 16:05 < kanzure> you have to be stern with it 16:05 < uzkruh> Pesky "laws" and "ethics'"... 16:06 < fenn> it's .gpt4 not .gpt -4 16:06 < kanzure> .gpt4 Show an example of basic game loop control for Unity for a version of Nethack and call it Uzkruh's nethack. 16:06 < kanzure> .. are you sure? 16:07 < kanzure> .gpt Show an example of basic game loop control for Unity for a version of Nethack and call it Uzkruh's nethack. 16:07 < gptpaste> ​To create a basic game loop control for Unity in a version of Nethack called Uzkruh's Nethack, you can follow these steps:1. Create a new Unity project and name it "UzkruhsNethack".2. In the Assets folder, create a new folder called "Scripts".3. Inside the Scripts folder, create a new C# script called "GameLoopControl".4. Open the GameLoopControl script in your preferred c - http://sprunge.us/3yqt1Y 16:07 < fenn> to get better code it helps to have it lay out its plans first, then implement them 16:07 < gptpaste> ​Here is an example of a basic game loop control for Unity for a version of Nethack called Uzkruh's nethack:```using UnityEngine;using System.Collections;public class UzkruhsNethack : MonoBehaviour { private bool gameOver = false; void Start () { // Initialize game objects and variables } void Update () { if (!gameOver) { // H - http://sprunge.us/oiAsEw 16:08 < uzkruh> Fascinating 16:09 < muurkha> fenn: IRC terminal codes are not the same as ANSI color codes 16:09 < fenn> the refusal was because it didn't know what Uzkruh's Nethack was, i guess it assumed that was an existing game it didn't know about? 16:09 < fenn> muurkha: why would it print IRC color codes though? 16:10 < fenn> .gpt print some red words in ANSI color codes and then in IRC color codes 16:10 < gptpaste> ​ANSI color codes:- red- redIRC color codes:- 4red- 5red# Thu 22 Jun 01:10:12 CEST 2023 - print some red words in ANSI color codes and then in IRC color codes - http://sprunge.us/0nzZfK 16:10 < fenn> maybe the bot is doing translation 16:10 < kanzure> might also depend on the bot library 16:10 < kanzure> shoo 16:11 < kanzure> uzkruh: what is your education or job if any? 16:13 < fenn> according to geohot, gpt4 is just 8x 220B gpt3's trained on different data and glued together 16:14 < fenn> "mixture of experts" 16:14 < kanzure> aren't we all 16:14 < fenn> c. elegans isn't 16:29 < fenn> chatGPT recomments an exclusively bioluminescent diet, as part of global traditional alien cuisine day 16:29 < kanzure> for the record, i think a first contact scenario would actually result in us knowing collectively less, or confusion increasing greatly. 16:30 < fenn> "knowing" 16:32 < hprmbridge> kanzure> https://youtu.be/urcL86UpqZc 16:32 < Muaddib> [urcL86UpqZc] Interview with an Emacs Enthusiast in 2023 [Colorized] (8:50) 16:33 -!- sgiath [~sgiath@2a02:25b0:aaaa:aaaa:a3c3:ed4b:6b06:0] has quit [Server closed connection] 16:33 < fenn> "Indeed, restructuring entire cities to resemble bioluminescent coral reefs and incorporating such ecosystems would come with significant expenses." 16:33 -!- sgiath [~sgiath@mail.sgiath.dev] has joined #hplusroadmap 16:36 < fenn> Opinion: The government should create a mandatory annual scrapbooking event where everyone's most embarrassing moments are organized into publicly displayed, interactive wall exhibits, encouraging laughter and deep contemplation. 16:37 < fenn> i explicitly asked for no dystopias *thinking face* 16:37 < kanzure> "parenthesis paralysis" 16:38 < kanzure> "you can make a keybinding for that" 16:38 < kanzure> "also you can bind that keybinding to a shorter keybinding." 16:39 < fenn> sounds like bit pair encoding 16:39 < fenn> byte pair encoding* 16:39 < fenn> same diff 16:46 -!- uzkruh [~uzkruh@fl-67-235-196-252.dhcp.embarqhsd.net] has quit [Quit: Client closed] 16:47 -!- uzkruh [~uzkruh@fl-67-235-196-252.dhcp.embarqhsd.net] has joined #hplusroadmap 16:47 < fenn> not a dystopia apparently: 16:48 < fenn> 'Rotate jobs among people randomly every 30 days and, daily, workers must perform the previously mentioned "See Random Cute Animal Video" ritual 37 times intermittently, record their 20-second video impressions, and congregate for an hour weekly to facilitate skill-sharing, constructing a virtuous cycle of learning and relationship building in which new opinions and ideas are openly discussed and 16:48 < fenn> celebrated.' 16:51 < fenn> "With neon green as the global standard, it becomes simpler to coordinate interior design choices." 16:53 < fenn> '"color-ticker" technology that automatically alters the neon shades in every room daily within a wider color spectrum' 16:56 < fenn> "Permanently levitating prevents us from walking on delicate ecosystems like plant roots, protecting their habitats, and maintaining natural ecological relationships." 17:02 < uzkruh> Fun 17:02 -!- Malvolio [~Malvolio@idlerpg/player/Malvolio] has joined #hplusroadmap 17:03 -!- Malvolio [~Malvolio@idlerpg/player/Malvolio] has quit [Killed (tantalum.libera.chat (Nickname regained by services))] 17:03 < fenn> ' We should inform some tree populations of human transport history, using the "wood wide web," to learn valuable lessons about unleashing SPLT (silent personal levitation technology) context of a society embracing the closure of major highways.' (this is because highways disrupt tree communication, apparently) 17:04 -!- uzkruh [~uzkruh@fl-67-235-196-252.dhcp.embarqhsd.net] has quit [Quit: Client closed] 17:04 -!- Malvolio [~Malvolio@idlerpg/player/Malvolio] has joined #hplusroadmap 17:08 -!- uzkruh [~uzkruh@fl-67-235-196-252.dhcp.embarqhsd.net] has joined #hplusroadmap 17:10 < fenn> "whenever someone sneezes, they should save the mucus and add it to canisters, before sending the fully packed canister to a specially created park, named the Unified Communal Allergy Center" 17:13 < fenn> ' "The Genomic Aria," would consist of an intricate stained-glass installation spanning several meters. Fueled by the discovered genetic relationships and patterns, each glass piece represents a nucleotide base, gradually composing vast sequences from analyzed mucus samples.' 17:16 -!- Malvolio is now known as Guest1242 17:16 -!- Guest1242 [~Malvolio@idlerpg/player/Malvolio] has quit [Killed (zirconium.libera.chat (Nickname regained by services))] 17:17 -!- Malvolio [~Malvolio@idlerpg/player/Malvolio] has joined #hplusroadmap 17:23 < fenn> "While this may sound strange, cockroach-flavored ice cream could become a customized undercooking solution. If certain cultures accept insects as a food source and a person's gut flora tolerates cockroach-based nutrients, then such a product can be made and adjusted within the undercooking framework" (for optimal gut flora) 17:23 < fenn> "this flavor choice caters to the most refined and unique taste and will ultimately define the gastronomical preferences of our time." 17:25 < fenn> "tattoo QR codes on uncovered knees" 17:25 < uzkruh> Snowpiercer 17:27 < fenn> "However, such a system should uphold data privacy guidelines, allowing individuals to choose the content of their QR codes and control which information is accessible" 17:28 < fenn> 'It was not ice cream, but rather "protein blocks". This symbolizes the poor living conditions and exploitation experienced by inhabitants of the lower sections of the train.' 17:30 < fenn> 'if humans created an international organization focused on deciphering, documenting, and utilizing whale mating songs, they could rediscover the long-lost knowledge of Atlantis to advance humanity in unforeseen ways.' 17:35 < fenn> 'By working in communion with cetacean philosophers and combining our understanding of the "wood wide web," future interspecies collaborations could establish contact with extraterrestrial intelligent life.' 17:37 < fenn> omg 17:38 < fenn> 'Humanity should voluntarily dedicate one generation to collectively forego sleep in favor of scientific, cultural, and social development, essentially working round-the-clock while being artificially sustained by advanced bioengineering and neurostimulation technology. While this would be an incredibly difficult sacrifice and not align with existing individual desires or beliefs, the massive 17:38 < fenn> progress achieved would unarguably propel future generations into a flourishing era.' 17:38 -!- uzkruh [~uzkruh@fl-67-235-196-252.dhcp.embarqhsd.net] has quit [Ping timeout: 246 seconds] 17:39 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has quit [Ping timeout: 240 seconds] 17:39 < jrayhawk> sleep deprivation is a very traditional brainwashing technique 17:41 < fenn> "every human's role is constantly interchanging between scientific, artistic, manual labor, and cultural stewardship activities. These work rotations would facilitate communal ownership and the rapid assimilation of global advancements into society" 17:42 < fenn> i think chatGPT just invented coralpunk 17:45 < fenn> "To ensure humanity's long-term coherence with its core values amid the dizzying pace of development and interplanetary colonization, humanity needs to establish a unique, ever-expanding decision-making infrastructure consisting of synthetic philosopher-guides created through precise cognitive balancing of individuals under the aforementioned sleep-deprived, labor-intensive global stewardship. 17:45 < fenn> These philosopher-guides—converged AI-human hybrids yet to be conceived by 17:46 < fenn> any present mind—will steer humankind's pursuits with an intertwined grasp of logic, ethics, and aesthetics. As the torch-bearers of a radically new era, these entities will help humanity navigate decisions of universal consequence, guarding against the erosion of fundamental virtues by artificial priorities and bridging the otherwise 17:46 < fenn> yawning gap between unbridled futurism and sacrosanct human ideals." 17:48 -!- TMM_ [hp@amanda.tmm.cx] has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.] 17:48 -!- TMM_ [hp@136.243.23.165] has joined #hplusroadmap 17:51 < fenn> "The next crucial evolutionary step is for the human-animal-artificial intelligence entity to not only engage in cooperative relationships with formerly considered invasive species (e.g., rats, Asian carp) but also to develop and deepen our communication with these species to extract valuable information about tuning ecosystems while co-creating a socio-ecological system with enhanced 17:51 < fenn> adaptability." 17:53 < jrayhawk> the hazards of training on reddit 17:54 < fenn> "create and implement a comprehensive language alphabet formed from distinctive signals and communication processes used by diverse species. This would enable a direct representation of all ecosystem participants at significant international and interspecies events, further enabling collaborative decisions on ecological sustainability to guarantee representation of individual beings and 17:54 < fenn> ecosystems across the entire biome spectrum." 17:54 < fenn> jrayhawk: is it wrong? i value your input 18:22 < fenn> oh apparently coralpunk is a thing already? https://i.redd.it/1dj9avzs8xi91.png 18:23 < fenn> no, not really 19:05 < kanzure> huh 19:18 < hprmbridge> nmz787> https://twitter.com/BrianLongFilms/status/1671423822443212803 19:18 < hprmbridge> nmz787> Ai for some marvel tv show intro 19:19 < hprmbridge> nmz787> "The revelation is prompting some backlash from audiences on social media, given that using AI presumably eliminated the need for graphic designers and animators to craft the opening credits. 19:19 < hprmbridge> nmz787> 19:19 < hprmbridge> nmz787> The argument is especially timely, since the WGA is currently on strike after failed negotiations with the Association of Motion Picture and Television Producers, which included language about protecting writers against the use of AI in the writing process. Over the past eight weeks, the use of AI to replace laborers has come to the forefront of many discussions about the strike." 19:24 < fenn> worse than amateur 19:29 < jrayhawk> phoenix point was very coralpunk 19:30 < fenn> maybe that tv show intro was made a long time ago and is now finally wending its way through the production process 19:30 < fenn> i don't think the mainstream is really aware of just how good the models are now, in capable hands 19:33 < fenn> phoenix point looks very generic mass effect + judge dredd + cthulhu mythos 19:34 < jrayhawk> the thing julian gollop really aped is Alpha Centauri 19:34 < fenn> i guess this is something https://www.pcgamesn.com/wp-content/uploads/2021/09/phoenix-point-acheron-corrupted-horizons.jpg 19:35 < jrayhawk> "what if instead of a terrestrial world mind, we had an aquatic world-mind?" and then he proceeded to copy Gaia's Stepdaughters in the form of The Disciples of Anu 19:41 < jrayhawk> which, to his credit, was a good choice. alpha centauri was a rich source of narrative ideas few people have been exposed to 19:44 -!- codaraxis__ [~codaraxis@user/codaraxis] has quit [Ping timeout: 240 seconds] 19:49 < jrayhawk> a pity gollop's raison d'etre, tactics, was implemented as tedious unskippable braindead DPS racing no sane person wants to sit through 40 hours of 19:50 -!- yashgaroth [~ffffffff@2601:5c4:c780:6aa0:c4bf:8b01:e4e5:75eb] has quit [Quit: Leaving] 20:13 < fenn> the concept of hit points and health bars should be retired 20:18 -!- flooded [~flooded@149.102.254.66] has joined #hplusroadmap 20:22 -!- _flood [flooded@gateway/vpn/protonvpn/flood/x-43489060] has quit [Ping timeout: 258 seconds] 20:34 -!- codaraxis [~codaraxis@user/codaraxis] has joined #hplusroadmap 21:19 -!- codaraxis [~codaraxis@user/codaraxis] has quit [Ping timeout: 245 seconds] 21:43 -!- mxz [~mxz@user/mxz] has quit [Quit: cya] 21:44 -!- mxz [~mxz@user/mxz] has joined #hplusroadmap 22:51 -!- stipa [~stipa@user/stipa] has quit [Quit: WeeChat 3.0] 23:03 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has joined #hplusroadmap 23:26 < hprmbridge> Eli> I’ve been reading about prebiotic chemistry lately. Specifically nick lane. There are a number scientists working on the idea of how to create something from nothing. Create a self replicating life form from scratch. I used to think this would be impossible, and I still think it will be stupendously difficult given we have no idea what life originally looked like, but I’ve started to think that 23:26 < hprmbridge> Eli> there is a chance this could happen. There’s actually a number of people working on all aspects of the problem and discovering natural ways for the Krebs cycle to originate, for example. But if this can be done, it opens up a very interesting world of options. From allowing us to terraform other planets via panspermia, to allowing us to experiment with making different optimal types of life on 23:26 < hprmbridge> Eli> earth. Cells are self replicating and excellent at producing things that can be very useful to humans. 23:28 -!- test__ [flooded@gateway/vpn/protonvpn/flood/x-43489060] has joined #hplusroadmap 23:31 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has quit [Ping timeout: 250 seconds] 23:31 -!- flooded [~flooded@149.102.254.66] has quit [Ping timeout: 252 seconds] 23:31 < hprmbridge> Eli> The reason prebiotic chemistry is important is because we can learn what is thermodynamically favorable for life to begin. So, if we wanted to terraform another planet with synthetic cells, we might not know if they will be optimal or even successful on another planet. Understanding how life can arise spontaneously allows us to understand all the pathways so we can be more assured of success. 23:34 < hprmbridge> Eli> After all, the first organisms essentially terraformed earth. There was thousands of more CO2 in the atmosphere before eukaryotes appeared. The metabolic waste product of oxygen caused mass climate change and killed most of the organisms that existed, allowing eukaryotes, and thus humans, to survive. 23:47 < muurkha> prebiotic chemistry might be an interesting source of inspiration 23:48 < muurkha> but designing a replicator doesn't require knowing how one arose spontaneously 23:49 < muurkha> or could arise spontaneously --- Log closed Thu Jun 22 00:00:37 2023