--- Log opened Tue May 30 00:00:15 2023
00:47 < fenn> i didn't read the paper, but the concept was mentioned recently here i think
00:55 < fenn> Rewiring Cortex: Functional Plasticity of The Auditory Cortex During Development: (ferrets) https://web.mit.edu/surlab/publications/Newton_Sur04.pdf
00:59 < fenn> "Like inventive electricians rewiring a house, scientists at the Massachusetts Institute of Technology have reconfigured newborn ferret brains so that the animals' eyes are hooked up to brain regions where hearing normally develops.
00:59 < fenn> The surprising result is that the ferrets develop fully functioning visual pathways in the auditory portions of their brains. In other words, they see the world with brain tissue that was only thought capable of hearing sounds."
01:23 < rumdumpster> Very important to see this. Now, questionably, can that be redone AFTER the initial development?
01:24 < rumdumpster> Thank you for the link, fenn.
02:17 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has joined #hplusroadmap
02:48 -!- flooded [flooded@gateway/vpn/protonvpn/flood/x-43489060] has joined #hplusroadmap
02:52 -!- test_ [~flooded@146.70.183.131] has quit [Ping timeout: 265 seconds]
03:02 -!- cthlolo [~lorogue@77.33.23.154.dhcp.fibianet.dk] has joined #hplusroadmap
03:44 -!- tackleton [~tackleton@user/tackleton] has joined #hplusroadmap
03:45 -!- tackleton [~tackleton@user/tackleton] has left #hplusroadmap [Since you gotta go you better go now]
04:11 < hprmbridge> kanzure> there have been cases of gradual tumor development causing brain function and memory to shift around the head.
04:16 -!- rumdumpster [~rumdump@66.205.192.69] has quit [Killed (ozone (No Spam))]
04:36 < kanzure> hrm that didn't seem like spam to me
04:38 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has quit [Ping timeout: 250 seconds]
05:48 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has quit [Ping timeout: 248 seconds]
06:32 < muurkha> https://archive.ph/GJkSI nytimes article about the AI ban movement
06:43 -!- flooded is now known as _flood
06:55 < kanzure> i think that if i was seriously worried about intelligence then i would have to also ban human intelligence
07:14 < jrayhawk> humans are slow to takeoff
07:14 < jrayhawk> anti-nuclear-proliferation treaties are essentially a ban on human moral escape
07:15 < muurkha> moral escape?
07:15 < jrayhawk> taking off so hard that they can escape morality
07:16 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has joined #hplusroadmap
07:16 < jrayhawk> e.g. nagasaki
07:22 < muurkha> what would it mean to escape morality? by "morality" do you mean "retribution"?
07:23 < jrayhawk> sure, that word works, i think
07:24 < fenn> his theory is that morality comes out of iterated prisoner's dilemmas, or something like that
07:24 < muurkha> wow
07:32 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has joined #hplusroadmap
08:15 -!- cthlolo [~lorogue@77.33.23.154.dhcp.fibianet.dk] has quit [Quit: Leaving]
08:41 -!- TMM_ [hp@amanda.tmm.cx] has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
08:41 -!- TMM_ [hp@amanda.tmm.cx] has joined #hplusroadmap
09:01 < jrayhawk> i suppose i should write that down in more complete form than a bunch of slides thrown together in an hour
09:01 < kanzure> yes
10:08 < docl> perhaps it causes morality (to be conservative, I'll say is a dominating cause rather than entire cause) but the word "morality" usually describes people's moral opinions / what behavior it condones rather than what causes that. what's interesting about game theory (IPD being particularly interesting) is that it seems to be the main underlying system that shapes and selects which moral opinions can successfully replicate across societies (in this context, societies are communication networks wherein respect is legible)
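
[The IPD-selection idea docl describes is easy to make concrete in simulation. The following is a minimal illustrative sketch, not anything from the discussion itself: strategies, payoffs, and parameters are textbook defaults. Under replicator dynamics with the conventional prisoner's dilemma payoffs, a reciprocating strategy displaces unconditional defection, i.e. it is the one that "replicates".]

```python
# Evolutionary iterated prisoner's dilemma: which strategies replicate?
# Payoffs are the conventional (T=5, R=3, P=1, S=0) values.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opp_history):
    return opp_history[-1] if opp_history else 'C'  # reciprocate

def always_defect(opp_history):
    return 'D'

def play(a, b, rounds=50):
    """Total payoff to strategy a over an iterated match against b."""
    ha, hb, score = [], [], 0
    for _ in range(rounds):
        ma, mb = a(hb), b(ha)            # each sees the other's past moves
        score += PAYOFF[(ma, mb)][0]
        ha.append(ma); hb.append(mb)
    return score

strategies = {'tit_for_tat': tit_for_tat, 'always_defect': always_defect}
pop = {'tit_for_tat': 0.5, 'always_defect': 0.5}
for gen in range(30):
    # fitness = expected score against the current population mix
    fit = {n: sum(f * play(s, strategies[o]) for o, f in pop.items())
           for n, s in strategies.items()}
    mean = sum(pop[n] * fit[n] for n in pop)
    pop = {n: pop[n] * fit[n] / mean for n in pop}   # replicator update
print(pop)  # tit_for_tat ends up near 1.0: reciprocity dominates
```
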
10:11 < docl> some morals seem to have more direct evolutionary causes like "don't forget to feed your kids" or "anything about how you treat kids is super important". I won't pretend IPD isn't relevant to that, just pointing to direct evolutionary influence as a competing explanatory/causative factor
10:11 < muurkha> jrayhawk: you are presumably aware that most people believe strongly that morality is just a matter of what you can get away with without retribution
10:11 < muurkha> uh, is *not* just
10:26 < docl> muurkha: I would agree with most people about that. morality involves principled opinions that don't change under pressure. however, my model is that there's nonetheless an underlying system of memetic evolution that causes moral opinions to be more/less common/tightly held, and this system does heavily correlate to "what you can repeatably get away with". like biological evolution, memetic evolution, including its subset that causes morality to develop, is itself an amoral force (which, according to the complex of morals conserved in my brain, we should consistently try to account for, but not aspire to mimic uncritically)
10:35 < muurkha> docl: certainly the first part of that is correct, but it doesn't quite justify the second part
10:36 < muurkha> consider a possible universe in which good and evil are objectively real in the same way as death, cities, or pigeon mating rituals
10:37 < muurkha> and in which the humans have some ability to perceive them, but a limited and fallible ability, just like their ability to perceive astronomical phenomena, temperatures, and weights
10:38 < muurkha> in that universe there is still an underlying system of memetic evolution that causes moral opinions to be more/less common/tightly held, and this system does heavily correlate to "what you can repeatably get away with", but it's not an amoral force
10:39 < muurkha> what leads you to believe that we live in the universe you described rather than that one?
10:42 < muurkha> the idea that memetic evolution of morals in the humans is purely amoral seems to entail either that there is no objective morality, or that the humans can't perceive it (even through indirect effects, the way they perceive vitamin B6 deficiency or near-fatal doses of gamma rays), or that that perception has no effect on memetic evolution
10:43 < muurkha> is it possible you're just taking one of those premises as given, rather than because you see overwhelming evidence for it?
10:44 < docl> I think that an AI or human supervised moral evolution could have the characteristics you describe, but our universe seems unsupervised (at least, no supervision from outside humanity)
10:47 < docl> OK imagine a video game type universe (I've been reading too many litrpgs) that awards karma if you do good things and subtracts it if you do bad things. in that case, things that award karma are heavily influenced to be treated as moral by the residents. but hmmm. a video game universe that awards karma for doing bad things seems roughly as technically plausible.
10:47 < muurkha> how does that distinguish, for example, thermodynamics or the medical effects of heavy metal salts from the moral valence of abortion?
10:48 < muurkha> mercury chloride and refrigeration in our universe are presumably just as unsupervised as dilation and extraction
10:48 -!- TMA [tma@twin.jikos.cz] has quit [Ping timeout: 268 seconds]
10:51 < docl> well, the moral valence of abortion comes from a mix of IPD and more direct evolutionary pressure to value the experiences of kids as soon as possible in their development. you can dismiss the moral intuition that abortion is bad because it's too early to have an experience, or refuse to do so because it feels too close to the line that evolution made you hypersensitive to
10:57 < muurkha> well, certainly those things affect what the humans believe about the moral valence of abortion. but your argument is again begging the question
10:58 < muurkha> it thoroughly conflates the moral valence of abortion with what the humans believe about it, in a way that rests on the presumption that there is no objective morality
10:58 < docl> hmm. imagining a universe where there is an AI-like overlord quietly assigning the karma points, again, we might wonder if the overlord is doing so incorrectly. what would make it non-arbitrary, in principle?
10:59 < muurkha> try the same experiment with thermodynamics
10:59 < muurkha> a universe where there is an AI-like overlord quietly assigning temperatures
11:00 < docl> OK then in that case, if you find a way to repeatably get it to do so incorrectly you can build a perpetual motion machine
11:01 < muurkha> indeed, and the fact that you can't strongly suggests that there isn't such an AI-like overlord
11:02 < muurkha> but does that mean that things become hot or cold because of a mix of childhood education and more direct evolutionary pressure to value being at a comfortable temperature?
11:02 < docl> so then does the moral analogue, where it's incorrectly assigning karma points, suggest you could get away with evil? I reckon the karma needs to be visible to actually create the evil-getting-away-with machine though
11:04 < docl> not sure I'm grasping the complete thought you're going for
11:04 < muurkha> I don't know, because I'm sure that even if good and evil are objectively real, we don't have the level of knowledge about them that would be analogous to the law of conservation of energy
11:05 < muurkha> your reasoning leads you to believe that good and evil don't exist objectively. does it also lead you to believe that hot and cold don't exist objectively? If not, why not?
11:06 < docl> I'm not sure my reasoning implies that, I don't feel like I think good and evil don't exist objectively
11:07 < muurkha> well, as you pointed out above, if the residents of the universe have observable karma, that will tend to influence their moral system
11:08 < muurkha> so your argument as you've laid it out, that there is no such influence in this universe, seems to entail that either good and evil don't exist, or they can't be observed, or they can be observed but for some reason nobody cares
11:08 < muurkha> by "exist" I mean "exist objectively"
11:09 < docl> well, I think it's more a pattern-existence like the number line, rather than material-existence like a rock (which ranks higher in realness as I see things). but the existence becomes real via selection pressures including memetic ones including IPD / decision-consequences (since humans are materially real like rocks)
11:25 -!- yashgaroth [~ffffffff@2601:5c4:c780:6aa0:12:a24b:6c7d:9674] has joined #hplusroadmap
11:30 < docl> most numbers are too big / complex to instantiate in your brain. similarly, most morals are excluded by game theoretic pressures and/or evolutionarily conserved sensitivities. the bigness of the numbers you can't hold in your head is objectively true about them, as is the fact that such systemic factors cause you to reject incompatible morals. seems pretty objectively real to me, but maybe that's not what people are arguing about
11:33 < muurkha> well, that's why I gave as examples death, cities, and pigeon mating rituals rather than rocks?
11:36 < muurkha> as things that objectively exist in the same sense as good and evil plausibly do
11:37 < docl> sounds like we agree then? what's the question?
11:46 < docl> "is the complex systemic process favoring morality amoral"? well, its outcomes are morals, but that doesn't mean it makes them using moral methods. for example, it could torture a person into thinking torture is bad. thus conserving the anti-torture moral, but not itself conforming to it.
12:22 < kanzure> "Method and apparatus for prevention of thermo-mechanical fracturing in vitrified tissue using rapid cooling and warming by persufflation" https://patents.google.com/patent/WO2014008488A1/en
12:22 < kanzure> gas perfusion cryonics^
14:09 -!- TMM__ [hp@amanda.tmm.cx] has joined #hplusroadmap
14:10 -!- Jenda_ [~jenda@coralmyn.hrach.eu] has joined #hplusroadmap
14:12 -!- dustinm- [~dustinm@static.38.6.217.95.clients.your-server.de] has joined #hplusroadmap
14:16 -!- Netsplit *.net <-> *.split quits: TMM_, dustinm, superkuh, L29Ah, Jenda
14:19 -!- Netsplit over, joins: L29Ah
14:20 < muurkha> https://www.safe.ai/statement-on-ai-risk this abbreviated statement ("Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.") has gotten some of the AI-risk skeptics on board, including Aaronson and Norvig, but not LeCun
14:22 < muurkha> and Christiano
14:22 < muurkha> and this time it's broad enough Yudkowsky signed it too
14:25 -!- superkuh [~superkuh@user/superkuh] has joined #hplusroadmap
14:25 < muurkha> https://www.safe.ai/statement-on-ai-risk this abbreviated statement ("Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.") has gotten some of the AI-risk skeptics on board, including Aaronson and Norvig, but not LeCun
14:26 < muurkha> and Christiano
14:26 < muurkha> and this time it's broad enough Yudkowsky signed it too
14:26 < muurkha> (sorry for retransmission waste)
14:36 < kanzure> boring
14:59 -!- TMA [tma@twin.jikos.cz] has joined #hplusroadmap
15:14 < docl> re the patent, kind of sounds like they are just using persufflation, but in a hyperbaric chamber so it can work faster. combo of things that worked in the past
15:15 < docl> wowk/fahy's kidney success was hyperbaric as well
15:16 < docl> they're also working in stuff about H2S and/or CO to suppress metabolism in the process
15:16 < kanzure> aubrey was a little dismissive of this but i think we should be doing directed evolution on tissues and organs to select for genomes that lead to better outcomes on cryonics yield
15:16 < muurkha> nice
15:17 < docl> maybe so, if we can validate in cheap to make organoids say
15:19 < docl> especially for the purpose of extending the usefulness of engineered organs. maybe you could have cloned organs with engineered plasmids that do the work of making them cryo-compatible
15:20 < docl> and there was some success in cryoprotectant toxicity mitigation via directed evolution already. liver cells if I remember right
15:22 < kanzure> yeah we should do more of that
15:25 -!- flooded [~flooded@146.70.195.83] has joined #hplusroadmap
15:28 -!- _flood [flooded@gateway/vpn/protonvpn/flood/x-43489060] has quit [Ping timeout: 265 seconds]
16:02 < docl> ah, it was mouse ESCs https://www.sciencedirect.com/science/article/pii/S0011224018302669
16:52 -!- o-90 [~o-90@gateway/tor-sasl/o-90] has joined #hplusroadmap
16:54 -!- o-90 [~o-90@gateway/tor-sasl/o-90] has quit [Client Quit]
17:28 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has quit [Ping timeout: 250 seconds]
18:18 -!- deltab [~deltab@user/deltab] has quit [Ping timeout: 240 seconds]
18:28 -!- deltab [~deltab@user/deltab] has joined #hplusroadmap
18:34 -!- test__ [flooded@gateway/vpn/protonvpn/flood/x-43489060] has joined #hplusroadmap
18:38 -!- flooded [~flooded@146.70.195.83] has quit [Ping timeout: 265 seconds]
18:57 -!- TMM__ [hp@amanda.tmm.cx] has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
18:57 -!- TMM_ [hp@amanda.tmm.cx] has joined #hplusroadmap
20:00 -!- yashgaroth [~ffffffff@2601:5c4:c780:6aa0:12:a24b:6c7d:9674] has quit [Quit: Leaving]
21:40 < docl> I wonder if it's worth trying to DIY nucleobase production? put chlorella powder in a ball mill to grind it down and break down the DNA, then use alcohol to dissolve. then run through columns of powders doped with nucleobases complementary to the one you want to extract (to slow it down so it doesn't leave the column at the same time the others do)
21:43 -!- flooded [~flooded@154.47.25.194] has joined #hplusroadmap
21:47 -!- test__ [flooded@gateway/vpn/protonvpn/flood/x-43489060] has quit [Ping timeout: 240 seconds]
22:31 -!- faceface [~faceface@user/faceface] has quit [Remote host closed the connection]
22:32 < fenn> strawberry DNA extraction is an easy science class demo
22:37 < fenn> you're not going to break a strand of DNA into individual nucleobases with a ball mill. you'll need a few different nuclease enzymes
22:39 < docl> chlorella seems to have somewhat small cell size / big genome, and isn't hard to get. 20 million pairs per cell or 4x10^7 monomers, 2x10^-11 grams per cell as dry mass, so let's see... if those are 300 g/mol, then with avogadro's, 23 - 7 = 16 so something like 10^16 cells worth per 300g of nucleobase. that should be about 10^5 g. so then there should be something like 1 to 3g of nucleobase per kg
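
[docl's back-of-envelope estimate above checks out; spelled out explicitly, using only his stated figures (~4x10^7 nucleotide monomers per cell, ~2x10^-11 g dry mass per cell, ~300 g/mol per monomer), it lands at the low end of his 1 to 3 g/kg range:]

```python
# Sanity check of docl's chlorella nucleobase yield estimate.
AVOGADRO = 6.022e23
monomers_per_cell = 4e7        # his "20 million pairs" = 4x10^7 monomers
dry_mass_per_cell = 2e-11      # grams dry mass per cell
monomer_molar_mass = 300.0     # g/mol, roughly, per nucleotide residue

cells_per_mole = AVOGADRO / monomers_per_cell            # ~1.5e16 cells
dry_mass_per_mole = cells_per_mole * dry_mass_per_cell   # ~3e5 g dry mass
yield_g_per_kg = monomer_molar_mass / dry_mass_per_mole * 1000
print(f"{yield_g_per_kg:.1f} g nucleotide per kg dry chlorella")  # ~1.0
```
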
22:39 < fenn> since they aren't constrained in space the way they are in a double helix, you won't get good selectivity or even much binding from complementary bases on a resin
22:40 < fenn> i really don't think getting DNA is the problem
22:40 < docl> hmm. they are close in molecular weight so it could be hard to separate them centrifugally
22:42 < docl> why would that stop them from interacting?
22:42 < fenn> when planning out a series of organic chemistry synthesis reactions, you don't necessarily take the straightforward route of "first, make an X, and then add a Y to it to get X-Y" because it's hard to control where the Y ends up
22:42 < fenn> so i dunno if it's even useful to have natural nucleobases as a starting point
22:43 < fenn> we're perpetually low on chemists around here
22:44 < docl> I was originally thinking a set of all possible 4-mers would be useful. you could put them in a bundle of tubes and eject droplets. but you do need to be able to sort them
22:44 < docl> 256 isn't that big, just a 16x16 array
22:46 < docl> first one could be bonded to resin on 5', then for the successive steps you could photocleave the 5' while it's being ejected. 3' could just be pH deprotected like normal
22:48 < docl> probably could just get the 256 set of all possible 4-mers produced commercially
22:53 < docl> if you have a bunch of assorted dna fragments that aren't all predetermined length, getting the 4-mers (or other specific length) out should be relatively doable with powder columns, since length affects how fast they move.
22:56 < docl> but to use them as inks, you need purified versions of each 4-mer. so using column separation with bases that slow things down based on whether they interact is a partial solution to this. you could perhaps also put a bulky endcap on them so e.g. the A's on 5' slow it down more (interact more with the T's in the column) than the ones closer to 3'
22:58 < docl> (or it might be TLC. some kind of solid phase separation technique.)
23:00 < docl> (standard disclaimer: I'm self taught, lots of gaps, someone with real expertise please criticize)
23:06 < fenn> we talked about similar schemes before with microfluidic droplet storage libraries, but the n-mers were synthesized one pool at a time by traditional phosphoramidite minicolumn synthesis
23:06 < docl> might need enzymes, sure. could add subtilis for the dnase? I'm not clear on why a ball mill wouldn't work without it though. potentially solves breaking the cell walls at the same time as breaking up the DNA. maybe relatively high temperature, since we're not trying to keep the DNA intact. I assume nucleobases themselves are going to be tougher since they are smaller, but then again it's better if there are fewer parameters to control
23:07 < fenn> there's no reason to think it will break the molecule exactly where you want it to break
23:08 < fenn> it might just add random bits of junk to the molecule instead
23:09 < fenn> high pressure tends to favor reactions that reduce the number of species, because they take up less room as a single molecule than as many
23:09 < fenn> s/species/molecules/
23:11 < docl> ah, so you'd perhaps lose nucleotides as they convert to other more compact molecules
23:12 < docl> https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5942386/
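
[As a footnote to docl's 22:44 point: the full 4-mer "ink" set really is small. A trivial sketch enumerating it and laying it out as the 16x16 array he mentions; the row/column encoding here is an arbitrary illustrative choice, not anything specified in the discussion:]

```python
from itertools import product

BASES = 'ACGT'
four_mers = [''.join(p) for p in product(BASES, repeat=4)]
assert len(four_mers) == 256   # 4**4 possible 4-mers

# one possible 16x16 layout: the first two bases pick the row,
# the last two pick the column (each base pair indexes 0..15)
def pair_index(pair):
    return BASES.index(pair[0]) * 4 + BASES.index(pair[1])

grid = {(pair_index(m[:2]), pair_index(m[2:])): m for m in four_mers}
print(grid[(0, 0)], grid[(15, 15)])   # AAAA TTTT
```

--- Log closed Wed May 31 00:00:16 2023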