--- Log opened Fri Jul 14 00:00:58 2023 00:21 < fenn> more like a capacitor. the mitochondria is what does this in eukaryotes, not the whole cell 00:25 < hprmbridge> Eli> For the cell, I was thinking there were potassium and calcium signaling channels. Which is different from the electrochemical gradient of protons in the mitochondria. 01:14 < fenn> i don't think there's any redox chemistry going on 01:15 < fenn> there's the electron transport chain in the mitochondria of course, but that doesn't touch metal ions 01:21 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has joined #hplusroadmap 03:11 -!- AMG [ghebo@2605:6400:c847:1449::9441] has quit [Read error: Connection reset by peer] 03:36 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has joined #hplusroadmap 03:39 -!- AMG [ghebo@2605:6400:c847:1449::9441] has joined #hplusroadmap 04:17 -!- nsh- is now known as nsh 04:45 -!- sivoais_ [~zaki@199.19.225.239] has quit [Ping timeout: 245 seconds] 04:56 < docl> so is the plan for everyone who wants in on the project to make like 50 billion copies of themselves and travel as a group to all the stars in the galaxy for the next 200k years? then merge with their original self whenever they get home? I guess you'd get a constant stream of new info/memories since they get home at different times 05:09 -!- sivoais [~zaki@199.19.225.239] has joined #hplusroadmap 05:59 -!- yashgaroth [~ffffffff@2601:5c4:c780:6aa0:edaf:12df:ec3c:5001] has joined #hplusroadmap 07:30 -!- specbug [~specbug@2401:4900:1f28:4c4a:e4a9:d80a:7df5:a0a3] has joined #hplusroadmap 07:31 -!- specbug_ [~textual@2401:4900:1f28:4c4a:e4a9:d80a:7df5:a0a3] has joined #hplusroadmap 07:46 -!- specbug [~specbug@2401:4900:1f28:4c4a:e4a9:d80a:7df5:a0a3] has quit [Quit: Client closed] 07:48 -!- specbug_ [~textual@2401:4900:1f28:4c4a:e4a9:d80a:7df5:a0a3] has quit [Quit: Textual IRC Client: www.textualapp.com] 08:14 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has left #hplusroadmap [] 08:28 -!- flooded [flooded@gateway/vpn/protonvpn/flood/x-43489060] has joined #hplusroadmap 08:43 < heath2> oh neat, the irc client is still running. hi 👋 08:52 < superkuh> This is the way. 09:12 -!- TMM_ [hp@amanda.tmm.cx] has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.] 09:12 -!- TMM_ [hp@amanda.tmm.cx] has joined #hplusroadmap 09:15 -!- Giom[m] [~guillaume@2001:470:69fc:105::1:319b] has quit [Remote host closed the connection] 09:35 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has joined #hplusroadmap 10:50 < hprmbridge> alonzoc> Have been meaning to join this community, need to fix my IRC client at some point 15:22 < hprmbridge> jkhales> Nice interview https://open.spotify.com/episode/46xyWgjARK78LvN0yf753H with Ora. 15:22 < hprmbridge> jkhales> 15:22 < hprmbridge> jkhales> Ora is a early stage longevity startup that is looking to discover small molecule interventions by doing massive throughput testing on nematodes. They are hoping to get to 1 million a year. 15:22 < hprmbridge> jkhales> 15:22 < hprmbridge> jkhales> I like this approach as it's very empirical and I have a soft spot for robotics. 15:22 < hprmbridge> jkhales> 15:22 < hprmbridge> jkhales> The negative though is that the real expensive part is human clinical trials. The big value add for preclinical, aside from idea generation and speed, is the probability the effect will translate to a human. Right now it sits at around 10%, even after testing in mammals. This extreme phenotypic/non-rational approach seems like it will translate much worse than 10%. 15:28 < hprmbridge> kanzure> large scale automation on animals is a good strategy in my book. 15:29 < hprmbridge> alonzoc> I'm not completely convinced on nematodes on a target, but none the less looks good. Microfluidic organoid models for drug testing a particularly interesting but still a ways out for simulating the full mammalian system 15:30 < hprmbridge> kanzure> we can do germline interventions, those are easy compared to small molecule search, so why not do that. 15:31 < hprmbridge> alonzoc> Well for one harder to make a profit off and germline mods prolly will get shot down by the FDA and analogous agencies 15:31 < hprmbridge> kanzure> anti-aging won't get blanket approval anyway 15:34 < hprmbridge> alonzoc> True, but that's what DIY biolabs are for. Issue is that cutting edge research involving mass searches aren't really practical in small independent labs or DIY labs, so the research is focused on small molecule stuff from what I can tell. Maybe there's some secret government genetics research by the USA or China to make the top brass immortal but aside from that possibility antiaging isn't gonna 15:34 < hprmbridge> alonzoc> get the big bucks 15:34 < hprmbridge> alonzoc> However microfluidics does massively lower the barrier for entry 15:34 < hprmbridge> kanzure> microfluidics is difficult- easier to just have a bunch of pipetting robotics. 15:36 < hprmbridge> kanzure> anyway, why not germline? you don't need to do an impossible search for this intervention. 15:36 < hprmbridge> alonzoc> Ehh there's some good DIY manufacture tricks, still got to try most out but there's a neat ABS based method see: ESCARGOT 15:36 < hprmbridge> kanzure> plus, adults may already be too damaged for all we know 15:43 < hprmbridge> lemmy> unless we're willing to create a legion of progeriacs, I don't think germline editing is going to be some golden fleece for anti-aging 15:46 < hprmbridge> alonzoc> Now what would be the golden fleece would be a contagious viral vector that did germline modification. However that would be logistically and ethically problematic 15:47 < hprmbridge> lemmy> sure, create that and learn what it feels like to have 1 billion angry humans hunting you down 15:48 < hprmbridge> alonzoc> I did say ethically problematic, now something like small molecules or some epigenetic therapy would likely be a good target for research atleast at first 15:50 < hprmbridge> alonzoc> Also I'd rather not be known for eugenicist bioterrorism 15:52 < hprmbridge> lemmy> I have wondered what doing pulsed entinostat & vismodegib would do for epigenetics in aging 15:53 < hprmbridge> lemmy> entinostat pulse to loosen chromatin reg, then a dose escalation of vismodegib to shut down TGFb for a bit systematically and let NK cells clear out senescent cells 15:54 < hprmbridge> lemmy> main Q there is whether you'd get a useful GF rebound after to repop damanged niches with "less fucked up" stem cells 15:55 < hprmbridge> alonzoc> Sounds like an interesting idea, need to look at entinostat but know Vismodegib can be unpleasant 15:56 < darsie> https://youtu.be/eYloDIO1kdg?t=131 15:57 < Muaddib> [eYloDIO1kdg] Fly Brain Fully Decoded and Mapped in 3D (17:03) 15:57 < hprmbridge> alonzoc> I'm looking forward to the research on Metformin being done though 15:57 * L29Ah inserts metformin 15:58 < hprmbridge> alonzoc> At the very least they're trying to create a standardized way to evaluate anti aging drugs 16:17 < hprmbridge> jkhales> do you know of any interesting reads on this? Super curious about mammalian organoid testing. 16:18 < hprmbridge> alonzoc> https://pubmed.ncbi.nlm.nih.gov/35333271/ 16:18 < hprmbridge> alonzoc> https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8283929/ 16:19 < hprmbridge> alonzoc> These are what's currently in my tabs to hand 16:20 < hprmbridge> alonzoc> Research for an argument/discussion I'm having with a friend on the possibility of chimeric cyborg abominations, as an alternative to traditional robotict 16:23 < hprmbridge> jkhales> is the argument about feasability or moral revulsion? 😄 16:23 < hprmbridge> alonzoc> Feasibility, don't worry I decided against studying medicine because of my disgust sensitivity. But neuroscience and genetics are still an interest 16:24 < hprmbridge> alonzoc> So I'm not in a position to start working towards chimeric murder cyborgs 16:24 < hprmbridge> alonzoc> Could be cool though in a dystopian sci-fi way 16:25 < hprmbridge> jkhales> I'm more worried about organoids and robotic experimentation not moving fast enough, than I am about any dystopian sci-fi 🙂 16:27 < hprmbridge> alonzoc> Tru, admittedly post-covid biotechnology has really started to pick up in pace 16:29 < hprmbridge> alonzoc> Tbh I find it funny that most people think transhumanists are "flesh is weak embrace the certainty of steel" but most transhumanists I know all think biology is super neat and a good inspiration if a little messy 16:34 < muurkha> steel rusts 16:39 < hprmbridge> alonzoc> Nanotechnological diamondoid cells are forever 16:47 < hprmbridge> kanzure> animals are good containers for brains. more scalable than electrical interfaces. 16:48 < hprmbridge> kanzure> are you focused on non-aging, or cortical organoids? 16:49 < hprmbridge> jkhales> I'm most interested in organoids in facilitating medicinal testing that has a high translation rate to humans. Most importantly for anti-aging. 16:51 < hprmbridge> alonzoc> I'm more focused on AI stuff mostly through the lens of singular learning theory. I just look at brains for inspiration, and just read anything I think is interesting 16:52 < hprmbridge> alonzoc> (which leads to recursive exploration of citations) 16:53 < hprmbridge> kanzure> what is aubrey's current plan exactly? is it still clean up damage? or is it chug yamanaka factors? 16:53 < hprmbridge> kanzure> there was one about replacing all the stem cells in the human body. is that one still being pursued? 16:55 < hprmbridge> kanzure> I was just having a beer with him, I should have took notes. I assume though, broadly, that the plan has not changed much. 16:56 < hprmbridge> alonzoc> Aubrey de grey? 16:57 < hprmbridge> kanzure> yes. 16:58 < hprmbridge> alonzoc> Neat, looked at some of his stuff in passing guys doing interesting work 16:59 < juri_> so, i know there's not a lot of love for my 3d printing stuff here, but, my talk was accepted for chaos communication camp. https://pretalx.c3voc.de/chaos-communication-camp-2023/talk/review/TSTWWQ37BBEXKFWJRHMNQJWQRGDKBGCA 17:00 < hprmbridge> lemmy> Aubrey's plan seems to have been hoping that senolytics or anti-lipofuscin drugs would get made by someone else 17:00 < juri_> I'll be covering the project in general, as well as some discoveries on parallelizing slicing algorithms. 17:01 < hprmbridge> jkhales> Aubrey's pursuing robust mouse regeneration in his new LEV foundation. 17:02 < hprmbridge> jkhales> He's implied publicly that yamanaka factors/reprogramming is over invested in. 17:03 < hprmbridge> kanzure> there are other techniques for reprogramming that have not received as much attention. 17:03 < hprmbridge> alonzoc> Got any good papers? 17:04 < hprmbridge> kanzure> I am very ill today but you can dig here https://diyhpl.us/~bryan/papers2/ 17:05 < hprmbridge> kanzure> and https://diyhpl.us/~bryan/papers2/longevity/ 17:05 < hprmbridge> kanzure> https://diyhpl.us/~bryan/papers2/neuro/ etc.. 17:05 < hprmbridge> kanzure> happy to find more exact links in my library once I am not actively vomiting (tomorrow perhaps) 17:06 < hprmbridge> kanzure> oh also https://diyhpl.us/~bryan/papers2/ai/ 17:06 < hprmbridge> kanzure> https://diyhpl.us/~bryan/papers2/microfluidics/ 17:06 < hprmbridge> alonzoc> Oof, well good paper stockpiles are always appreciated I need to shape mine up. I basically have a big content addressed flat folder of pdfs (named so each pdf is it's hash.pdf), and I used to use yacy to index them all 17:07 < hprmbridge> alonzoc> but yacy was a bloated and annoying mess so I dropped it 17:08 < hprmbridge> kanzure> separately I have conference notes on the hplusroadmap wiki https://diyhpl.us/wiki/transcripts/ 17:09 < hprmbridge> kanzure> you may also be interested in https://diyhpl.us/wiki/genetic-modifications/ or https://diyhpl.us/wiki/gene-editing/ 17:09 < hprmbridge> alonzoc> Yeah, I read a lot of the wiki before joining the chat. Good stuff mostly found you a few months back through nanorex stuff 17:10 < hprmbridge> kanzure> ah. some of the hplusroadmap members are working on a new molecular nanoengineering system & they want to advance to implementation. 17:10 < hprmbridge> kanzure> (of physical atoms I mean) 17:11 < hprmbridge> kanzure> cc @mechadense 17:11 < hprmbridge> alonzoc> A friend of mine and I were discussing molecular self assembly and molecular nanotechnology, and bemoaning how sad it was that the whole thing died off. There was also discussions on a QED sim integrated with a parralel SMT solver , but that didn't go far 17:12 < hprmbridge> alonzoc> That'd be super awesome, I was looking at current precisions of off the shelf components and they're getting close to the requirements I saw from the 00s 17:13 < hprmbridge> alonzoc> It only takes one small nanofactory 17:13 < hprmbridge> kanzure> subangstrom precision is doable with certain interferometry 17:13 < hprmbridge> kanzure> well, cells are already nanofactories too 17:13 < hprmbridge> alonzoc> Yeah but I don't wanna build with proteins 17:14 < hprmbridge> kanzure> you might be stuck with proteins for a while haha 17:14 < hprmbridge> alonzoc> And getting bio to do what you want is expensive and finicky 17:14 < hprmbridge> alonzoc> Gimme a machine that pumps out molecular wang tiles that react to assemble and I'll be happy 17:15 < hprmbridge> alonzoc> (I may be on track for disappointment for a while) 17:16 < hprmbridge> nmz787> I've posted some SMT RNA solver stuff in here before 17:16 < hprmbridge> nmz787> I wouldn't say it's died tho 17:17 < hprmbridge> nmz787> I'm working on eventually applying smt/sat solver to silicon fab design rules, and probably also micro fluidic design eventually 17:17 < hprmbridge> alonzoc> Or well diamondoid mechanosynthesis and stuff like that quickly got less funding. Like "Nanotech" that's photolithography and shit are super useful but not the nanotech of drexler's dreams 17:19 < hprmbridge> kanzure> @jkhales sounds like AlonzoC found his way here from nanoengineer. what about you? 17:38 < hprmbridge> jkhales> I think I saw you interacting with @pmetzger on twitter and then click on your pinned tweet: https://twitter.com/kanzure/status/1615359557408260096?s=20 🙂 17:40 < hprmbridge> kanzure> cool. 17:44 -!- stipa_ [~stipa@user/stipa] has joined #hplusroadmap 17:45 -!- stipa [~stipa@user/stipa] has quit [Ping timeout: 245 seconds] 17:45 -!- stipa_ is now known as stipa 17:50 < hprmbridge> alonzoc> Oh for anyone interested in AI who likes maths https://shaoweilin.github.io/ is a goldmine of a blog 17:50 < fenn> cover story of science magazine about cryopreservation: https://www.science.org/content/article/how-to-deep-freeze-entire-organ-bring-it-back-to-life 17:51 < hprmbridge> alonzoc> The whole mutable vs latent processes bit alone is worth a read 17:51 < hprmbridge> alonzoc> @jkhales you might be interested 17:52 < fenn> it's not even greek to me. at least i sort of understand greek 17:53 < fenn> now i can get chatgpt to translate mathlish to english 17:54 < hprmbridge> alonzoc> Lol, it's not really maths unless 95% of the time you feel like you're smashing your head against a brick wall 17:54 < fenn> "A (Hochschild) cohomological view of relative information must be motivic!" right. what percentage of the general academic population would understand that sentence 17:55 < fenn> alonzoc that is a cultural problem with the people who dominate the field, not a problem with the universe 17:55 < hprmbridge> alonzoc> Well that was april first. But yeah tbh I don't fully understand his motivic information theory stuff, I'm still learning my Algebraic Geometry 17:55 < hprmbridge> jkhales> looks very interesting! 17:56 < fenn> in "Lockhart's Lament" he describes math as "the art of explanation" 17:56 < hprmbridge> alonzoc> Ehh there's a fundmental degree of frustration in doing maths, beyond just learning. I've wasted days on attempts to derive a natural prior/regulariser for singular models and failed repeatedly 17:57 < hprmbridge> alonzoc> But pedagogy can be improved 17:59 < hprmbridge> alonzoc> However then you feel like you've made progress and it's the best feeling ever. The worst feeling ever is when you spend like 5 pages deriving some stuff and the result is useless, I had that when I was tinkering with corrigiblity. Oh that reminds me I've actually got something interesting I made, it's seriously unpolished 18:03 < hprmbridge> alonzoc> I originally posted it in lesswrongs discord in August but no one really could usefully talk about it appeared https://cdn.discordapp.com/attachments/1064664282450628710/1129578946221051976/Main_6.hs 18:03 < hprmbridge> alonzoc> ```I thought some lesswrongers might appreciate this, well I was re-reading https://arxiv.org/abs/1508.04145 for the 100th time and was reminded a while back that I read the reflective oracle was limit computable so I decided to implement a hacky reflective oracle in haskell. 18:03 < hprmbridge> alonzoc> So here it is, it doesn't handle halting yet, but if you can bound the runtime of the oracle calls you can use a Levin Search style doubling trick to make it converge even in the light of a non-halting machine in the query (running out of runtime would just be recorded as the machine returning false to get standard reflective oracle behaviour). 18:03 < hprmbridge> alonzoc> I hope I can implement a version of EDT that can solve the 5&10 problem using this and with a UDT-esque interface to it's universe, as I found the implementation of CDT in the paper a tad hacky. 18:03 < hprmbridge> alonzoc> Basically it works by making code in the monad output a stream (hacked together using List) and the oracle output is determined by bayesian updating a beta distribution using past samples and choosing it's output for this sample based on the probability it assigns to the machine it's being asked about that it's probability of outputting True is greater than the query probability that it was given. 18:03 < hprmbridge> alonzoc> ``` 18:04 < hprmbridge> kanzure> there is a lesswrong discord? is anyone there working on transhumanist engineering projects? 18:04 < fenn> why would you expect it to be any different from lesswrong the web forum 18:04 < hprmbridge> alonzoc> I don't really frequent it it's mostly anti-AI cultishness and "rationality" 18:04 < hprmbridge> kanzure> dunno, just asking fenn 18:04 < hprmbridge> kanzure> yeah... 18:05 < hprmbridge> alonzoc> Like I care about saftey but they're not making much headway 18:05 < hprmbridge> alonzoc> Imo they abandoned their only useful lead in around 2014 18:05 < hprmbridge> kanzure> they should learn chemistry, biology, genetics, or make new silicon fabs, work on aging, something. 18:05 < hprmbridge> alonzoc> They've got some good stuff in the technical posts 18:06 < hprmbridge> kanzure> or work on AI. but most of the posts aren't about why you should develop your own AI skillz. 18:06 < fenn> i would suggest not publishing exclusively on discord. it's absolutely terrible for discoverability 18:06 < hprmbridge> alonzoc> True, I am planning to write papers 18:06 < hprmbridge> alonzoc> Or a blog or something 18:07 < hprmbridge> alonzoc> Also I refuse to actually join lesswrong 18:07 < hprmbridge> alonzoc> Then there's no going back 18:07 < fenn> kanzure they discourage "AI capabilities research" in general 18:07 < fenn> a blog is good 18:08 < hprmbridge> alonzoc> It's mostly used as a smoke screen imo to try and prevent peasants getting access to AI imo 18:08 < fenn> there are many true believers 18:09 < hprmbridge> alonzoc> A lot of the stuff in the opensource AI field feels actively designed to fuck it over tbh I had some interesting discussions with a guy who did a ton of work on positional coding that just got scrapped for rotary when rotary is actually bad preventing window rolling and not allowing for generalisation to higher context sizes 18:09 < hprmbridge> alonzoc> He was seriously pissed about it 18:10 < hprmbridge> alonzoc> Yeah the true believers drive me up the wall. You have issues with AI saftey.... Then work on it. Practically. 18:10 < fenn> they aren't concerned with practical safety 18:11 < hprmbridge> alonzoc> Don't hobnob around talking BS philosophy and making up excuses. Like only excuse they brought up that I find slightly convincing is demons and I only think they'd be even potentially an issue in auto constructive stuff where the learning process is us learnt 18:12 < fenn> a church may give food to the poor, but they're only doing it to get people in the door so they can save their souls. it's the same thing here with "rationality" and "safety" 18:12 < fenn> the real goal is to get you to join the search for the holy grail of "alignment" which probably can't exist 18:14 < hprmbridge> alonzoc> Basically the thought is training sufficiently large models might induce subagents even in non agentic tasks because agents are a useful structure. So like maybe the learning algorithm might discover that MCTS on a world model is a useful way to model some thing and then you end up with that subagent "waking up". This is slightly more likely if your environment has mutual information with the 18:14 < hprmbridge> alonzoc> notion of an agent as then the conditional complexity is lower reducing the effective cost you'd pay to a regulariser 18:14 < hprmbridge> alonzoc> Alignment I think is a fool's errand but I think corrigiblity will be a practical and sufficient stop gap to mitigate existential risk 18:15 < hprmbridge> alonzoc> Along with some slight modification on value learning, there's a good paper by hutter on a model of value learning that stops wireheading 18:19 < hprmbridge> alonzoc> But basically from my attempts to figure out safety the best approach is to figure out a good model of action under moral uncertainty. Then the AI won't turn everything to paperclips because it properly tracks the extrapolation uncertainty beyond the domain it's reward signal was given in. 18:19 < hprmbridge> alonzoc> Paper clips = good 18:19 < hprmbridge> alonzoc> Human life = not sure 18:19 < hprmbridge> alonzoc> best not to kill humans to make paper clips then because it could be really bad 18:19 < hprmbridge> alonzoc> Formalising it nicely however is problematic you end up having to drop axioms that seem intuitive 18:20 < hprmbridge> alonzoc> And you can get a reflectively stable off switch from moral uncertainty more or less aside from a slight issue with non unique fixed points that makes me doubt 18:21 < hprmbridge> kanzure> glad you refused to join. although I do wonder if I should have argued more in 2009-2011 to them that they should also build things. 18:21 < hprmbridge> alonzoc> I'm an e/acc not EA lol 18:22 < hprmbridge> alonzoc> Tbh it's mostly that I can't stand to hear the constant worship of yud who afaik hasn't done anything useful ever 18:22 < hprmbridge> alonzoc> (I don't count shit fanfic as having utility) 18:23 < hprmbridge> kanzure> https://youtu.be/nXARrMadTKk 18:23 < Muaddib> [nXARrMadTKk] The Ballad of Big Yud (Eliezer Yudkowsky) (2:13) 18:26 < L29Ah> 03:07:37] alonzoc> Then there's no going back 18:26 < L29Ah> why not? 18:28 < hprmbridge> alonzoc> I'll be on big Yuds radar then ofc, he'll send effective altruist hit squads after me to stop capabilities research. nah it's mostly humourous hyperbole 18:29 * L29Ah spent some time on the forum but given up on it after my post about rational ejaculations was downvoted into oblivion and then removed for no reason, i went to lesswrong gatherings for a few years and got plenty of nice people to chat with there (and my only long-term partner), joined ##lesswrong and was quickly banned while trying to figure out the meaning of "akrasia", was banned from lesswro 18:29 < L29Ah> ng slack for wearing a Raelism symbol as an avatar 18:29 < L29Ah> fun times 18:30 < hprmbridge> alonzoc> Lol, I went to an EA meet-up once only one other person showed up lol 18:30 < hprmbridge> alonzoc> Good conversation, they weren't really technical tho 18:30 < L29Ah> and the rhetorical/cognitive biases they like to study are actually useful knowledge 18:30 < hprmbridge> alonzoc> Oh yeah, but tbh I can get that from actual philosophy 18:31 < L29Ah> or better the corresponding wikipedia category 18:31 < hprmbridge> alonzoc> However I do agree with them on Bayesian epistemology being the one true epistemic framework. It's just I don't pretend to actually think in probabilities 18:32 < L29Ah> well you can eyeball your priors with some probability ;) 18:32 < hprmbridge> alonzoc> The problem of priors that philosogosrs wave about is mostly moot as any universal prior works you just get slightly different constants on your convergence time 18:32 < L29Ah> sadly it's not very useful when papers aren't formulated in the bayesian framework 18:33 < hprmbridge> alonzoc> Like fisher priors are also neat but sadly fail for singular models 18:33 < hprmbridge> alonzoc> Fisher priors are basically objective priors for any model class where the parameters sit in a manifold 18:34 < hprmbridge> alonzoc> On also there is a slight issue in Bayesian epistemology about logical omniscience and beliefs causally connected to your actions 18:37 < hprmbridge> alonzoc> But like the former imo is moot as it's the "ideal" reasoning I care about not what actually happens, and the latter can be remedied by a bunch of tricks. And both are fixed by the "logical inductor" paper even though it imo overuses the metaphor of a betting market 18:38 < hprmbridge> alonzoc> Seriously I'm not sure what idol lesswrongers worship more, HPMoR or betting markets 18:39 < L29Ah> the Ω 18:39 < hprmbridge> alonzoc> Also this is beautiful 18:48 < muurkha> L29Ah: rational ejaculations? 18:50 < L29Ah> muurkha: i don't remember the details, but i asked what would be the best ejaculation frequency to have, citing a well-received question on some other biological routine there 18:51 < hprmbridge> alonzoc> . Okay 18:51 < hprmbridge> alonzoc> Someone's been reading cryptonomicon recently 19:01 < muurkha> L29Ah: that doesn't seem like an unreasonable question, and certainly many belief systems have opined on it 19:02 < hprmbridge> alonzoc> Yeah it's a fairly standard biological function and has a lot of effects on the rest of the body and the mind 19:02 < hprmbridge> alonzoc> Figuring out the optimal policy for self regulation shouldn't be taboo 19:03 < hprmbridge> alonzoc> I'd think it's something lesswrongers would be pro finding out they talk about other stuff people think is taboo 19:06 -!- yashgaroth [~ffffffff@2601:5c4:c780:6aa0:edaf:12df:ec3c:5001] has quit [Quit: Leaving] 19:16 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has quit [Ping timeout: 252 seconds] 19:26 -!- codaraxis [~codaraxis@user/codaraxis] has quit [Quit: Leaving] 20:07 -!- test_ [flooded@gateway/vpn/protonvpn/flood/x-43489060] has joined #hplusroadmap 20:11 -!- flooded [flooded@gateway/vpn/protonvpn/flood/x-43489060] has quit [Ping timeout: 250 seconds] 21:03 < fenn> AI ethics https://www.smbc-comics.com/comics/1689349202-20230714.png 22:45 -!- stipa [~stipa@user/stipa] has quit [Quit: WeeChat 3.0] 23:16 -!- flooded [flooded@gateway/vpn/protonvpn/flood/x-43489060] has joined #hplusroadmap 23:20 -!- test_ [flooded@gateway/vpn/protonvpn/flood/x-43489060] has quit [Ping timeout: 245 seconds] 23:38 -!- TMM_ [hp@amanda.tmm.cx] has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.] 23:38 -!- TMM_ [hp@amanda.tmm.cx] has joined #hplusroadmap --- Log closed Sat Jul 15 00:00:59 2023