--- Log opened Mon Oct 24 00:00:57 2022
00:43 -!- mirage33515 [~mirage335@2a01:4f8:120:2361::1] has joined #hplusroadmap
00:46 -!- mirage335 [~mirage335@2a01:4f8:120:2361::1] has quit [Ping timeout: 244 seconds]
00:51 -!- mirage3351559 [~mirage335@2a01:4f8:120:2361::1] has joined #hplusroadmap
00:54 -!- mirage33515 [~mirage335@2a01:4f8:120:2361::1] has quit [Ping timeout: 244 seconds]
01:47 -!- spaceangel [~spaceange@ip-94-113-214-149.bb.vodafone.cz] has joined #hplusroadmap
03:04 -!- darkdarsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has joined #hplusroadmap
03:17 -!- spaceangel [~spaceange@ip-94-113-214-149.bb.vodafone.cz] has quit [Ping timeout: 276 seconds]
03:19 -!- spaceangel [~spaceange@ip-94-113-214-149.bb.vodafone.cz] has joined #hplusroadmap
04:27 -!- luna__ [~luna@82-132-230-25.dab.02.net] has joined #hplusroadmap
04:29 -!- luna__ [~luna@82-132-230-25.dab.02.net] has quit [Read error: Connection reset by peer]
04:29 -!- luna_ [~luna@user/luna/x-4729771] has quit [Ping timeout: 252 seconds]
04:31 -!- luna__ [~luna@2a01:4c8:46:b1ff:446a:1ee:d54c:725a] has joined #hplusroadmap
06:08 -!- alexbfi [~alexbfi@dzy9b2yypwzmq353lw3bt-3.rev.dnainternet.fi] has quit [Remote host closed the connection]
06:11 -!- alexbfi [~alexbfi@dzy9b2yypwzmq353lw3bt-3.rev.dnainternet.fi] has joined #hplusroadmap
06:11 -!- alexbfi [~alexbfi@dzy9b2yypwzmq353lw3bt-3.rev.dnainternet.fi] has quit [Remote host closed the connection]
06:33 -!- luna__ is now known as luna_
06:34 -!- luna_ [~luna@2a01:4c8:46:b1ff:446a:1ee:d54c:725a] has quit [Changing host]
06:34 -!- luna_ [~luna@user/luna/x-4729771] has joined #hplusroadmap
06:34 -!- luna_ is now known as is
06:34 -!- is is now known as luna_
06:38 -!- mirage3351559 [~mirage335@2a01:4f8:120:2361::1] has quit [Quit: Client closed]
06:38 -!- mirage3351559 [~mirage335@2a01:4f8:120:2361::1] has joined #hplusroadmap
08:28 -!- juri_ [~juri@84-19-175-179.pool.ovpn.com] has quit [Ping timeout: 260 seconds]
08:55 -!- mirage335155998 [~mirage335@2a01:4f8:120:2361::1] has joined #hplusroadmap
08:59 -!- mirage3351559 [~mirage335@2a01:4f8:120:2361::1] has quit [Ping timeout: 244 seconds]
08:59 -!- juri_ [~juri@79.140.114.58] has joined #hplusroadmap
09:31 -!- archels_ [~neuralnet@static.65.156.69.159.clients.your-server.de] has joined #hplusroadmap
09:31 < archels_> https://www.tiktok.com/@bsmachinist/video/7157587293451062574
09:41 < kanzure> use proxitok if you must use tiktok
09:44 -!- lsneff [~lsneff@2001:470:69fc:105::1eaf] has quit [Quit: Bridge terminating on SIGTERM]
09:49  * L29Ah uses mpv
09:49 -!- lsneff [~lsneff@2001:470:69fc:105::1eaf] has joined #hplusroadmap
09:54 -!- juri_ [~juri@79.140.114.58] has quit [Read error: Connection reset by peer]
09:57 -!- juri_ [~juri@79.140.114.58] has joined #hplusroadmap
10:03 -!- juri_ [~juri@79.140.114.58] has quit [Ping timeout: 252 seconds]
10:04 -!- juri_ [~juri@84-19-175-179.pool.ovpn.com] has joined #hplusroadmap
10:23 < kanzure> why is pmetzger focused on hard vacuum instead of aqueous phase molecular nanotechnology?
10:31 -!- livestradamus [~quassel@user/livestradamus] has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
10:32 -!- livestradamus [~quassel@user/livestradamus] has joined #hplusroadmap
10:33 < L29Ah> because diamond is strong, and meat is weak
10:38 < nsh> never saw diamond mine a human
10:39 < nsh> nor hew one into a gem
10:42 < docl> 16:34 < drmeister> We are scheduling a meeting with the Drexler folks later this week - I'll keep you abreast.
10:42 < docl> from #clasp
10:42 < docl> hope someone records it
10:43 < docl> would the drexler folks in question be you maaku?
10:45 < kanzure> docl: what protein nanostructures would you make if you were assured that you could make new proteins very cheaply?
11:07 < docl> bunch of answers at the high level. low hanging fruit seems to be nanopores for drug delivery, catalysts for many different applications, membranes for mineral separation and better dialysis / kidney analogue implants, antibodies wrt whatever viruses are of concern (covid, latest cold/flu variants), virus detection. skin based biomedicine would be interesting (with access via sweat pores) due to low
11:07 < docl> invasiveness. dental health augments e.g. plaque breakdown / enamel formation. AGE cleaving and other SENS strategies. intracellular trehalose synthesis / import for cryoprotection.
11:10 < kanzure> no nanofactories?
11:13 < docl> well yes nanofactories but you do need to build up a library of vacuum tolerant components for that. I think you could design solution phase stuff to make diamond/alumina/silicon with a bit of additional processing. maybe handle the positioning on a monolayer of something that can be freeze dried
11:16 < docl> also you could make spiroligomer building blocks from protein, and a vast array of other monomers, and assemble them into complex polymers which can work as solution phase machinery (or be designed more for machine phase)
11:18 < kanzure> what do you mean proteins as spiroligomer monomers?
11:20 < docl> the monomers used to make spiroligomers are amino acid based, so you could make them from regular amino acids with the right proteins serving as catalysts. there is a bulk synthesis process for them already, but a biosynthetic route could be cheaper
11:21 < kanzure> oh, but what about just printing out amino acids to make spiroligomers?
11:24 < docl> I think they are different molecules, so you'd need to alter them to make bis-amino acids from regular ones. but yeah if you print them I think you can do the stepwise reactions to make spiroligomers at small scale. needs some steps like deprotection though
11:28 < docl> https://sci-hub.se/10.1016/j.tetlet.2016.09.032
11:43 < docl> so per that paper they have a process to make them from trans-4-hydroxy-L-proline, which is a naturally occurring precursor to the 20 essential alpha-amino acids (so already biosynthesizable). the process might itself be adaptable to an inkjet type printing setup
11:47 < docl> .t https://www.ncbi.nlm.nih.gov/pmc/articles/PMC507883/
11:47 < saxo> POSaM: a fast, flexible, open-source, inkjet oligonucleotide synthesizer and microarrayer - PMC
11:50 < kanzure> yeah i guess i really mean keeping everything solution phase, even the nanofactories
12:01 < docl> yeah, that's an approach with legs IMO. protein is already engineerable and spiroligomers let you do things needing more heat/structure with a lower predictive cost. could fall short of some of the drexlerian possibilities that assume diamond, but that's not necessarily a big deal vs the functional diversity and cheap synthesis that's possible. also while I'm not sure if biosynthetic diamond has been
12:01 < docl> done yet, it doesn't sound totally implausible to me
12:01 -!- mirage335155998 [~mirage335@2a01:4f8:120:2361::1] has quit [Quit: Client closed]
12:01 -!- mirage335155998 [~mirage335@2a01:4f8:120:2361::1] has joined #hplusroadmap
12:14 < muurkha> kanzure: does he answer if you ask him? hard vacuum does seem to be the environment where the most progress has been made, things like EUV lithography, FIB milling, and often even SPMs
12:15 < kanzure> he's busy arguing with maaku
12:15 < kanzure> i wouldn't want to interrupt whatever's going on there
12:15 < muurkha> like, you can totally use an STM at atmospheric pressure, yeah, but almost only on gold and platinum, since almost any other metal oxidizes right away in air
12:16 < kanzure> no no i just mean that molecular protein machines seem to already exist and that direction might be more fruitful than trying to do diamonds
12:17 < docl> or might be a good way to reach the diamonds
12:17 < muurkha> yeah, quite plausibly
12:24 < muurkha> I mean until we have a working solution for the problem, we don't know which route to a solution will turn out to have been easier
12:35 < docl> subtractive mfg of diamond might be easier. feed the protein based mechanism some cheap diamond dust and get specific diamond components out
12:52 < docl> getting diamond stuff to join together sounds trickier since the hydrogen layer has to be removed while putting the carbons in close enough proximity to bond. maybe there's a way to start at the middle of 2 diamond surfaces and move the water/mechanisms outward after yanking the hydrogen. or maybe you could go non-aqueous with e.g. a perfluorocarbon fluid
13:08 < muurkha> hmm, how do you persuade a carbon atom to part company with its diamond lattice? getting the hydrogen or whatever off the surface so you can access the carbon doesn't sound too hard, but then how do you remove the carbon atom?
13:17 < docl> hmm. needs a stronger or at least equivalent bond to latch onto. maybe shove two oxygens in there?
13:19 -!- Netsplit *.net <-> *.split quits: catalase
13:21 -!- Netsplit over, joins: catalase
13:23 < docl> you could try to ionize it with UV, but I'm not sure that's reasonable to construct a biomimetic device for. electrolysis maybe?
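A rough sense of the energy scales behind the UV-ionization idea above can be had with a back-of-the-envelope calculation. The physical constants and the ~3.6 eV C-C single-bond energy below are standard textbook values, not figures from the discussion; this is a sketch of the energy budget only, not of the (much harder) localization problem.

```python
# Back-of-the-envelope: does a single UV photon carry enough energy
# to break a carbon-carbon bond in diamond? Bond energy is an
# approximate textbook value (an assumption, not from the log).
H = 6.626e-34        # Planck constant, J*s
C_LIGHT = 2.998e8    # speed of light, m/s
EV = 1.602e-19       # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    """Photon energy E = h*c/lambda, expressed in eV."""
    return H * C_LIGHT / (wavelength_nm * 1e-9) / EV

CC_BOND_EV = 3.6     # approximate single C-C bond energy, eV

for wl in (100, 200, 300):
    e = photon_energy_ev(wl)
    print(f"{wl} nm photon: {e:.1f} eV ({e / CC_BOND_EV:.1f}x a C-C bond)")
```

So a 100 nm photon comfortably exceeds one C-C bond energy; as muurkha points out next, the obstacle is localizing that energy to one atom, not supplying it.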
13:40 < docl> .t https://www.nature.com/articles/ncomms4341
13:40 < saxo> Two-photon polarization-selective etching of emergent nano-structures on diamond surfaces | Nature Communications
13:43 < muurkha> two oxygens might work if you're removing a carbon atom that had two dangling hydrogen-terminated bonds, but that limits you to removing atoms at vertices and along crystal edges, doesn't it? maybe a helpful screw dislocation would allow you access to the whole volume that way
13:44 < muurkha> the problem with UV is that a carbon atom is about 0.1 nm and UV is more like 100 nm, so any particular minimum or maximum of a UV electrical or magnetic field will have at least tens of millions of carbon atoms in it
13:48 < docl> ah, good point
13:49 < docl> maybe a combination of UV + oxygen at the target site then?
13:49 < muurkha> conceivably. or fluorine or something?
13:51 < muurkha> like if you could functionalize the surface with fluorine at specific atomic sites, maybe you could illuminate it with UV at a frequency that excites the fluorine, or the carbon-fluorine bond, and has some significant probability of pulling the carbon atom free despite its three bonds to other carbons
13:52 < docl> yeah, fluorine is a good thought
13:53 < muurkha> if the speed of sound in diamond is 12 km/s, a phase shift of 1 nm is about 80 femtoseconds
13:56 < muurkha> if you could whack the opposite side of the diamond in a large number of places with phases determined to within ±8 femtoseconds or so, maybe you could arrange for the longitudinal shock waves from the various impact points to constructively interfere at a location chosen with single-atom precision
13:56 < muurkha> to shake that one atom loose
13:57 < muurkha> but if you can shoot things at the diamond with that kind of precision, maybe you can just cut a particular carbon atom free by shooting a hot fluorine atom at it
13:57 < docl> sounds complicated to calculate/recalculate if you want much functional diversity. might be usable for mass producing specific parts though
13:58 < docl> yeah. I wonder what can be built along the lines of shooting hot ions/atoms in this kind of setup
13:59 < muurkha> better, you can mechanically position a frisket with a single-atom hole in it over the diamond surface, or selectively protect the diamond surface with a resist coating, and then blast a FIB at it to remove an expected quantity of 0.1 atoms from the hole
14:00 < muurkha> then remove the protection, measure the surface to see if you successfully removed an atom, and repeat
14:01 < docl> I wonder if micro/nano scale ion beam tech is feasible? could be, using electrostatic acceleration
14:13 < docl> alternately you might set up the surface of the liquid just right and blast with ions every so often
14:21 < muurkha> you mean, focus the ion beam to a precision of a single atom?
14:23 < docl> I wasn't thinking that specifically, was thinking of using a resist like you mentioned
15:05 -!- spaceangel [~spaceange@ip-94-113-214-149.bb.vodafone.cz] has quit [Remote host closed the connection]
16:03 < maaku> docl not me
16:03 < maaku> Well that seemed pointless
16:04 < maaku> I don't think he actually has any interest in APM. Seems entirely focused on AGI because working directly on APM is too hard
16:04 < maaku> More magical AI thinking.
16:05 < docl> well I'm on the fence, but sure seems like clever solutions to APM are possible to me and we don't need virtual engineers for it
16:14 < docl> better tools could help, but that's normal software. cad/cam, molecular dynamics, fem/fea, databases sharing libraries of recipes. seems insane to not update in favor of near APM for the same reasons AI is progressing (more GPU power to throw at it)
16:23 < muurkha> if AGI succeeds soon it seems likely to be the route by which we reach APM. or something does
16:24 < muurkha> in terms of APM I don't think we're doing a good job at taking advantage of the CPU power we had 25 years ago
16:28 < docl> maybe it'll plateau in a mediocre form that can't engineer anything much in the real world
16:31 < muurkha> maybe, but at least it ought to be enormously better at heuristic search for problems like SAT and place-and-route
16:33 < docl> .wik Boolean_satisfiability_algorithm_heuristics
16:33 < saxo> "The Boolean satisfiability problem (frequently abbreviated SAT) can be stated formally as: given a Boolean expression B with V = { [...]" - https://en.wikipedia.org/wiki/Boolean_satisfiability_algorithm_heuristics
16:33 < muurkha> but the problem we have right now is more mundane. like, FreeCAD still crashes a lot, and doesn't reliably handle medium-complexity designs, and SolveSpace still can't fillet
16:34 < muurkha> and neither of them do the kind of hybrid atomic/continuum modeling you want for diamondoid APM CAD
16:35 < docl> right
16:36 < docl> the hackers are failing us
16:37 < docl> and that might relate to the fact that their work tends to go uncompensated
16:40 < docl> also that chemistry and software are disanalogous enough that it's rare to overlap the specialties enough
16:49 < juri_> muurkha: just popping my head in, while working on 3d printing stuffs. the cad side of the world has a lot to go, but we hackers are working on it.
16:49 < muurkha> juri_: I know! how's your interval arithmetic stuff going?
16:50 < muurkha> I'm still ignorant about Clifford algebras
16:50 < juri_> for EDA, kicad seems to be all of the rage.. and i'm trying to solve 6 axis 3d printing, which is taking a while (3 years in, at least 6 months to go.)
16:50 < juri_> muurkha: well! i actually have two simple functions that test correct!
16:50 < muurkha> wonderful!
16:50 < muurkha> what do they do
16:50 < juri_> i'm deep in a cleanup cycle right now.
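muurkha's earlier point about heuristic search for problems like SAT can be illustrated with a minimal DPLL solver. This is a deliberately tiny sketch of the standard textbook algorithm (unit propagation plus naive branching), not any particular solver's implementation; the branching choice is exactly the part where better heuristics win.

```python
# Minimal DPLL SAT solver. Clauses are lists of nonzero ints;
# a negative int is a negated variable. The branching heuristic
# (pick the first unassigned literal) is the crudest possible --
# real solvers win by choosing better, which is the "heuristic
# search" being discussed above.
def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})
    changed = True
    while changed:                      # unit propagation to fixpoint
        changed = False
        simplified = []
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                # clause already satisfied
            rest = [l for l in clause if abs(l) not in assignment]
            if not rest:
                return None             # clause falsified: conflict
            if len(rest) == 1:          # forced (unit) literal
                assignment[abs(rest[0])] = rest[0] > 0
                changed = True
            simplified.append(rest)
        clauses = simplified
    if not clauses:
        return assignment               # every clause satisfied
    var = abs(clauses[0][0])            # naive branching choice
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
model = dpll([[1, 2], [-1, 3], [-2, -3]])
```

`model` comes back as a satisfying variable assignment, or `None` for an unsatisfiable formula such as `[[1], [-1]]`.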
16:51 < muurkha> kicad seems to be doing okay but that's just PCB layout, not all of EDA
16:51 < juri_> just finds where an arbitrary line intersects the axes. but the important part is they calculate an error value, and the resulting point is always within that value of the 'mechanically calculated' result.
16:52 < muurkha> I mean I don't think anyone is ever going to switch from virtuoso or stratus to kicad. yosys maybe
16:52 < juri_> so my intersection function is doing the right thing, AND i've learned enough error tracking to calculate a proper error value for the dirt simplest of cases.
16:52 < docl> nice!
16:52 < muurkha> juri_: that's great progress!
16:53 < juri_> now it's just raising the complexity. :)
16:53 < juri_> along the way, i did some 3 months of horrible things to my codebase.. so i'm re-writing those three months of work.
16:54 < juri_> already found four bugs that caused a loss of error.
16:54  * docl was being overdramatic and silly (mostly down on myself for not hacking more)
16:55 < juri_> docl: oddly, i'm as pro-hplus as the next person here, but i've been as much as told i don't belong here, for not being a chemist or a biologist or..
16:55 < docl> oof
16:55 < docl> well we need hackers
16:58 < juri_> so please, keep being frustrated with the state of cad tools. makes me feel more needed. :)
16:59 < docl> will do :)
17:02 < muurkha> I think more generally I'm frustrated with the state of tools for solving complex problems with computers
17:02 < muurkha> I mean is Excel and Jupyter and R-Markdown really the best we can do? c'mon
17:14 < juri_> haskell is really nice for expressing algorithms, instead of writing state machines. it's a big move, in my world.
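The shape of what juri_ describes (an intersection point accompanied by a guaranteed error bound) can be sketched roughly as below. This is a toy Python illustration, not juri_'s actual Haskell code, and the forward error bound here is a deliberately pessimistic made-up model, just to show the "result plus certified error" idea.

```python
# Toy sketch of error-tracked line/axis intersection, loosely in the
# spirit of juri_'s description. The error model is an illustrative
# assumption (a few ulps per operation, scaled by magnitudes), not
# the real algorithm.
def x_axis_intersection(x0, y0, x1, y1, eps=2**-52):
    """Where does the line through (x0,y0)-(x1,y1) cross y = 0?
    Returns (x_crossing, error_bound). Assumes y0 != y1."""
    t = y0 / (y0 - y1)            # parameter at which y(t) = 0
    x = x0 + t * (x1 - x0)
    # Crude, pessimistic forward error bound on the computed x.
    err = eps * (abs(x0) + abs(t) * (abs(x1) + abs(x0)) + abs(x)) * 4
    return x, err

x, err = x_axis_intersection(0.0, 1.0, 2.0, -1.0)
# the true crossing for this line is x = 1.0, and the contract is
# that the returned x lies within err of it
```

The useful property juri_ is after is the contract: the exact ("mechanically calculated") result always lies within `err` of the returned value, so downstream geometry can reason about worst cases.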
17:17 < muurkha> yeah, I like FP
17:17 < muurkha> I'd probably like Haskell if I learned it
17:59 -!- Hooloovoo is now known as Hoolooboo
18:10 -!- dustinm [~dustinm@static.38.6.217.95.clients.your-server.de] has quit [K-Lined]
18:10 -!- faceface [~faceface@user/faceface] has quit [K-Lined]
18:10 -!- archels_ [~neuralnet@static.65.156.69.159.clients.your-server.de] has quit [K-Lined]
18:12 < maaku> muurkha: I remain unconvinced that you can solve engineering problems entirely by just sitting around and thinking and not experimenting at all
18:12 < maaku> yet that is essentially what "AGI will solve APM" would imply
18:29 -!- dustinm [~dustinm@static.38.6.217.95.clients.your-server.de] has joined #hplusroadmap
19:18 < muurkha> maaku: you definitely cannot, but you seem to be implying that both AGI and people who are interacting with AGI will be limited to sitting around and thinking and not experimenting at all. or did you have a less implausible hidden premise?
19:20 < maaku> no that's not what I'm getting at. i was talking about the AGI / virtual engineer, not the AGI developer
19:20 < maaku> I was criticizing the idea of a FOOM
19:21 < muurkha> well, I can't assess AGI foomability until we have AGI to observe
19:21 < maaku> that an AGI could just "figure out" nanotechnology by running simulations on a supercomputer for 10 minutes or something
19:21 < muurkha> but even without foomability I think you can make a lot of progress in empirical sciences faster by using your experimental resources more intelligently
19:21 < maaku> well I feel confident speaking about this, because the underlying principles are understood
19:22 < muurkha> if the underlying principles were understood we'd already have AGI :)
19:22 < maaku> no you're misunderstanding. i mean the underlying principles of science and engineering
19:23 < muurkha> oh, well, that may be, although nobody seems to have a good account of where hypotheses come from
19:23 < maaku> our models tend to not reflect reality for a ton of reasons. being able to think more quickly just makes you wrong faster. you need an experimental feedback loop
19:23 < muurkha> yes, you definitely do
19:24 < muurkha> being wrong faster is very useful when you have that feedback loop
19:24 < maaku> yes, agreed
19:25 < muurkha> by "the underlying principles of science and engineering" do you mean the social processes by which those disciplines operate, the epistemological reason they work, or the underlying laws of the physical universe?
19:25 < maaku> we can get that fast feedback loop if we do something like GitHub Copilot for engineering design, which is doable with existing technology--no AGI required
19:25 < maaku> making it agenty doesn't actually make anything much faster
19:25 < muurkha> it might, we'll have to see
19:26 < muurkha> you can certainly imagine an AGI controlling a FIB milling machine that plans and executes 1000 experiments per second, but not if it's something like GitHub Copilot
19:27 < muurkha> but there might be practical reasons related to computational resources that this doesn't happen, or not until after it would be interesting
19:27 < maaku> why not? I could have a parallel mill setup and put 1000 copilot designs on it just as easily as an agenty AI
19:28 < muurkha> well, you'd need 1000 people to do that, and each experiment wouldn't be informed by the previous one
19:28 < maaku> muurkha: but I just mean something more mundane, along the lines of "emergent complexity" [sic]
19:28 < maaku> you can understand basic newton's laws, but that doesn't make things like turbulence obvious
19:28 < muurkha> by "the underlying principles of science and engineering" you mean something along the lines of "emergent complexity"?
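The turbulence example above is apt: the criterion engineers actually use is Reynolds' empirically determined dimensionless number, not something read directly off Newton's laws. A quick illustration, where the fluid properties are standard values for water at roughly 20 °C and the ~2300 pipe-flow transition threshold is itself an experimental result rather than a derivation:

```python
# Reynolds number Re = rho * v * L / mu for pipe flow. The laminar ->
# turbulent transition near Re ~ 2300 was established experimentally
# (Reynolds, 1883), not derived from first principles -- which is the
# "emergent complexity" point made above.
def reynolds(rho, velocity, length, mu):
    return rho * velocity * length / mu

RHO_WATER = 998.0    # kg/m^3, water at ~20 C
MU_WATER = 1.0e-3    # Pa*s,  water at ~20 C
TRANSITION = 2300    # empirical threshold for pipe flow

for v in (0.01, 0.1, 1.0):          # m/s through a 25 mm pipe
    re = reynolds(RHO_WATER, v, 0.025, MU_WATER)
    regime = "turbulent" if re > TRANSITION else "laminar"
    print(f"v = {v} m/s: Re = {re:.0f} ({regime})")
```

Newton's laws fix the form of the equations, but knowing where along this one-line formula the flow regime flips required two centuries of experiment.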
19:29 < maaku> even though it technically derives from newton's laws
19:29 < maaku> and simulations tend to NOT show complexities which actually show up in the experiment
19:29 < maaku> plus a lot of numerical failure modes causing simulations to not represent reality
19:30 < muurkha> you can get turbulence in CFD simulations though
19:30 < muurkha> probably it wouldn't have taken 200 years from Newton to Reynolds if Newton had had CFD
19:30 < muurkha> or, say, Bernoulli
19:31 < maaku> ab-initio CFD turbulence doesn't reflect reality
19:31 < maaku> our simulations are actually using experimentally determined models rather than ab-initio for these sorts of things
19:32 < muurkha> it can reproduce some of its features, and if you can compare it against real experiments you can tune it to reproduce them better
19:32 < muurkha> yes, of course
19:32 < maaku> and/or correction factors
19:32 < maaku> so yeah, if you want an AGI to make a virtual engineer... it will end up looking a lot like actual engineering
19:32 < maaku> lots of trial and error, validating models against reality, prototyping, etc.
19:33 < muurkha> yes, for sure
19:33 < maaku> it is NOT a magic box you put in a request ("make me a nanofactory") and get a blueprint out of that works
19:33 < muurkha> but think about how much faster that would be if the engineers were a lot smarter and could build much-better-planned prototypes in orders of magnitude less time
19:34 < maaku> which is what the MIRI people believe, and sadly also metzger I think
19:34 < maaku> I don't think it would be orders of magnitude faster
19:34 < muurkha> it might not be
19:34 < maaku> We're talking constant factors, and with existing technology large capital costs for gpus
19:35 < muurkha> but it seems unlikely (as the MIRI people point out) that human intelligence is some sort of natural limit to intelligence, or close to it
19:35 < maaku> 10x maybe. 100x? you'd better have some explanation for why, because it isn't obvious to me
19:36 < maaku> muurkha: I disagree with that, but we're getting into off topic philosophy with that
19:36 < maaku> (limits of intelligence I mean)
19:36 < muurkha> well, I think it's extremely relevant
19:37 < muurkha> the standard MIRI argument, which seems pretty watertight to me, is that if human-level intelligence can produce an AGI with human-level intelligence, then an AGI can produce an AGI
19:38 < maaku> the base error here is grading intelligence on a scale.
19:38 < muurkha> and because engineering designs generally improve incrementally over time, that AGI can, with enough work, produce a slightly better AGI
19:38 < maaku> is a supercomputer more turing complete than a raspberry pi?
19:38 < muurkha> I don't think that necessarily implies foomability, though they do
19:39 < muurkha> but it does imply that AGI will, almost by definition, be capable of superhuman performance shortly after it is capable of human performance
19:39 < muurkha> yes, a supercomputer is a closer approximation to Turing-completeness than a Raspberry Pi
19:40 < muurkha> it also implies that that rate will tend to increase, at least if there's enough of an inflow of money
19:40 < muurkha> and if PRC doesn't nuke TSMC 18 months from now
19:42 < muurkha> I think it's plausible that, even barring any kind of collapse, the improvement will be slow, and that it will be a while before Microsoft is willing to devote their giant GPT-42 cluster to the APM problem
19:48 < muurkha> but it does seem likely that after a short period of the centaur-chess-style Copilot situation, where humans are complementary to automatic computation (01998 to 02008 in the case of centaur chess), we'll see a period where that stops being interesting
19:48 < maaku> What I mean though is that there is not anything qualitatively that a supercomputer can do which a raspi cannot. It is more performant in solving larger problem classes, but there aren't things it fundamentally can do which smaller computers can't.
19:49 < muurkha> Yes, that's true, if you hook them both up to the same SAN
19:50 < maaku> Likewise with intelligence--there are not problems which we are fundamentally incapable of solving (assuming paper and pen if nothing else for overflowing working memory)
19:50 < maaku> The only difference is speed.
19:50 < muurkha> and because a human with a large sheet of paper can do the same thing, certainly they could simulate AlphaZero with pencil and paper in order to play better chess
19:51 < muurkha> but that's a very significant difference in speed indeed
19:52 < maaku> Narrow AI is like better cognitive artifacts like pencil and paper. We can get very far with better cognitive tools.
19:52 < muurkha> yes
19:53 < maaku> But the argument that general/agenty AI will allow us to solve fundamentally harder problem classes, or otherwise give us capabilities presently unreachable is vacuous
19:54 < maaku> I'm not saying that we can make nanotechnology with no tools other than pencil and paper. We will use the computational resources available to us.
19:54 < muurkha> I don't think it's vacuous; AlphaZero already gives us capabilities that were previously unreachable even without being general
19:54 < maaku> And some of those resources are emerging narrow AI things like copilot-for-engineering-design
19:54 < maaku> But I reject *AGI* being on the critical path here.
19:55 < maaku> Copilot, and also alphafold for inverse folding.
19:56 < muurkha> oh, well, I agree that we might solve APM before AGI, and surely computational resources will be a key part of doing it if so 19:57 < muurkha> but if we reach AGI first, it seems almost inevitable that autonomous AGI will be better suited to solving APM than humans employing narrow AIs are 19:57 < muurkha> if not at first, soon afterwards 19:59 < maaku> Specify what you mean by "soon" please 20:00 < maaku> Developing highly advanced AI will make APM happen faster. It certainly won't make it happen slower. 20:00 < maaku> But by "faster" do you mean it will happen in days or weeks (FOOM), or that it will just accelerate us by a small constant factor and shave some months off the timeline? 20:01 < maaku> I once believed the former. I'm now convinced of the latter. 20:01 < muurkha> well, I don't really know, because I don't understand consciousness or intelligence 20:02 < maaku> But while it would be useful to shave months off the timeline, there are non-AI things that could be done now which would shave years or decades off the timeline 20:02 < muurkha> maybe. like actually working on the problem 20:02 < maaku> exactly 20:03 < maaku> Sitting around waiting for AI overlords to grant us eternal youth through medical nanotechnology will take X years. Actually doing the hard work of making it ourself will take Y years. 20:04 < maaku> I firmly believe that Y << X, so I work on APM directly, not AI. 20:05 < maaku> (Also if you do worry over AI x-risk, which I largely don't, then I would argue reducing the hardware overhang by making APM happen sooner would be in your advantage too.) 20:05 < muurkha> what AI overlords do or don't do seems almost entirely unpredictable to me 20:06 < maaku> Why do I belive Y << X? Because AI solving APM and nano applications is exactly the same problem + all the problems of AGI. 20:06 < maaku> And the constant factor speedups of having virtual engineers are small enough as to not make up the difference. 
20:06 < muurkha> I mean it depends on who develops them and, even in foomless scenarios, on tiny details of the situation 20:07 < muurkha> I don't think we can confidently assess those speedups without having ever observed AGI in the wild 20:07 < maaku> Since actually making nanotech is a really hard engineering problem that requires an experimental feedback loop. 20:07 < muurkha> agreed 20:09 < muurkha> what makes the experimental feedback loop slow? 20:11 < muurkha> my list is "reactions whose equilibrium is shifted unfavorably by high temperatures", "moving large objects", and "people" 20:12 < muurkha> I feel like it's plausible that you could do APM without any of them. so how fast could you get the feedback loop of designing an experiment, running it, and revising your theories in response? 20:14 < maaku> Yeah exactly, actually building things and gathering data is slow. Well, not slow per se, but human-speed 20:15 < maaku> How much faster could an AGI be vs. a lab full of engineers working 3 shifts? 20:15 < maaku> 5x faster? 10x faster? Maybe. Not 1,000,000x faster. 20:16 < maaku> This isn't an intelligence-limited task. 20:16 < maaku> It's an actually-roll-up-your-sleeves-ang-get-to-work limited task 20:17 < maaku> (Which for human enterprises actually means a funding-limited task in need of financial engineering. But now we're wanding into my startup and away from the core tech problem.) 20:21 < muurkha> why couldn't it be 1,000,000× faster? it's true that you can't move one-kilogram objects around at 1,000,000× times the speed that human engineers can 20:22 < muurkha> but we've been turning vacuum tubes on and off, moving clouds of electrons, 1,000,000× faster than engineers can do it by hand since the 01920s 20:23 < muurkha> you can totally move around atoms and molecules 1,000,000× faster than engineers can do it by hand. 
if moving around atoms and molecules, rather than thinking about them, is the bottleneck, you can totally get a speedup of 1,000,000× 20:23 < muurkha> but only if you can think faster 20:24 < muurkha> (I mean, technically it's physically possible to move one-kilogram objects around at 1,000,000× that speed, but doing so will kill anyone anywhere nearby) 20:31 < muurkha> tapping-mode AFM cantilevers do routinely tap on solid surfaces about a million times a second, while I can't really tap on my keyboard usefully more than about ten times a second. but an AFM typically just taps on the same million or so points on the surface in the same way, second after second, because it's being controlled by a human with biology like mine, who can't think about what to do after 20:31 < muurkha> each of those million taps 20:43 < maaku> you're comparing apples to oranges, no? 20:44 < muurkha> I don't think so. like, literally, when people do experiments, this is what they do, right? 20:44 < maaku> in a direct-to-APM scenario the human engineers would still be programming machines to conduct experiments faster 20:44 < maaku> *faster than could be done by hand 20:44 < muurkha> they think about what they're going to do, and then they move some objects around in space and mix some substances, and then they think about the results 20:44 < muurkha> and then they repeat 20:45 < muurkha> if the objects are large, then moving them around in space can take a long time, longer than thinking about it 20:45 < muurkha> if they're small then it doesn't 20:45 < maaku> ok first of all an AI is not going to be able to think much faster / lower latency than a human. 
these a going to be very large inference models 20:45 < maaku> it's just capable of integrating more information at once 20:45 < maaku> but that's not really relevant to conducting these sorts of experiments, just the analysis afterwards 20:45 < muurkha> well, certainly *if* an AGI can't think faster than a human, *then* it won't speed up the feedback loop 20:45 -!- mrdata_ [~mrdata@135-23-182-185.cpe.pppoe.ca] has joined #hplusroadmap 20:46 < muurkha> but what if it does? what if it can do the analysis afterwards in a microsecond rather than a month? 20:46 -!- mrdata_ [~mrdata@135-23-182-185.cpe.pppoe.ca] has quit [Changing host] 20:46 -!- mrdata_ [~mrdata@user/mrdata] has joined #hplusroadmap 20:46 < maaku> but if you look at a lot of industrial proceses, like the manhatten project or the construction of the first haber-bosch industrial plant, it was bottlenecked by physical processes not thought 20:46 < muurkha> yeah, but have you ever visited a haber-bosch plant? 20:47 < muurkha> those are not one-kilogram objects 20:47 < muurkha> those are one-thousand-tonne objects 20:47 < muurkha> moving them around takes a long time because they are large and heavy 20:47 < muurkha> similarly much of the extensive machinery in the Manhattan project 20:47 < maaku> sure but even during the design phase when they were trying 10,000 different catalysts to find the right one that worked, they did it by having 100 mini haber-bosch plants in parallel and cycling through test runs 20:48 < maaku> literally months of experimental time 20:48 < muurkha> they actually found lots and lots of catalysts that worked, FWIW 20:49 < muurkha> but this is in the first category of my list, "reactions whose equilibrium is shifted unfavorably by high temperatures" 20:49 -!- mrdata [~mrdata@user/mrdata] has quit [Ping timeout: 272 seconds] 20:50 -!- mrdata_ is now known as mrdata 20:50 < muurkha> they were stuck with it because they already knew the reaction they were after, they were just trying to 
figure out how to speed it up and scale it up 20:51 < maaku> So moving to APM, I suspect a lot of the early work will be done with alternating vacuum and CVD vapor baths over a worksite, which takes physical time to cycle 20:52 < muurkha> running 100 experiments in parallel is enormously less informative than running 100 experiments one after the other, because in the first case your 100th experiment was designed before you knew anything, and in the second case it was designed with the knowledge of the previous 99, 98 of which were also more informative than in the parallel case for the same reason 20:52 < maaku> or for solution-phase chemistry, there's a fair amount of stuff that takes time as well 20:52 < muurkha> yes, in general you can speed up reaction rates by raising the temperature 20:52 < muurkha> but as I said in some cases that shifts the reaction equilibrium unfavorably, so it doesn't help 20:53 < muurkha> what limits the time required to shift between vacuum and CVD vapor baths? thermal mass, no? 20:54 < muurkha> so if your vacuum chamber is a cubic millimeter you can do it a lot faster than if it's a liter 20:54 < maaku> yeah well I think we're splitting hairs here. you asked why we ought not expect a 1,000,000x speedup. that would be taking something that would normally take 20 years to do and do it in 10 minutes 20:55 < maaku> and my answer is not even the smartest AI could do a 20 year engineering project in 10 minutes. 20:55 < muurkha> yes, that's right.
I think that's a totally plausible thing to do for projects that don't have to involve humans (presupposing AGI is a million times faster, which is unlikely), don't require reactions whose equilibrium is shifted unfavorably by high temperatures, and don't require large objects 20:56 < muurkha> and I think I've demonstrated that satisfactorily in your chosen examples 20:57 < muurkha> I don't think that's splitting hairs at all; I think it clearly shows that your claim that AGI won't speed up APM much is dependent on your (plausible) claim that AGI won't think much faster for a long time 21:00 < maaku> well it might very well think faster, but thinking faster has exponentially decreasing utility 21:01 < maaku> maybe not exponential, I'd have to think on what the utility function is here 21:01 < maaku> but it's definitely not a linear relationship. not even close 21:02 < muurkha> maybe there are other engineering projects that are inherently slow for totally different reasons that neither of us have thought of, but I think it's telling that the examples you brought up were the Manhattan Project (with its 1500 km² Hanford Site) and the Haber–Bosch process, which are sort of the epitome of moving large objects around 21:02 < muurkha> and the second of which is slow for the reaction-equilibrium reason 21:03 < muurkha> I don't know anything about utility; that's getting off into ethical territory that is much harder to find agreement in 21:04 < muurkha> but I think that if you can think faster and also do experiments faster, you can produce the same work in a linearly shorter time period 21:04 < muurkha> it's true that if your thinking is slow you don't get much benefit from doing experiments faster, and vice versa 21:06 < maaku> i meant economic utility.
the marginal value of the extra thought that goes into something 21:06 < maaku> I expect the payoff for thinking about stuff to follow essentially a boltzmann distribution 21:07 < muurkha> it depends a great deal on the situation. if you're in a gunfight, the marginal value of thinking 5% faster than your opponent is that you live and he dies 21:07 < maaku> it pays *a lot* to think more about something up to a point, then it starts paying decreasing dividends 21:08 < maaku> my intuition, which could be wrong, is that humans are not at the top of that distribution, but close to it. augmented humans with better computational tools (including machine learning) are pretty near the apex 21:13 < maaku> muurkha: I would think that in a gunfight situation the marginal value of thinking more is negative... 21:14 < maaku> "shut up and shoot" 22:02 < fenn> .title https://newatlas.com/telecommunications/optical-chip-fastest-data-transmission-record-entire-internet-traffic/ 22:02 < saxo> Record-breaking chip can transmit entire internet's traffic per second 22:02 < fenn> .title https://www.nature.com/articles/s41566-022-01082-z 22:02 < saxo> Petabit-per-second data transmission using a chip-scale microcomb ring resonator source | Nature Photonics 22:14 -!- alexbfi [~alexbfi@193.209.237.36] has joined #hplusroadmap 22:37 < fenn> juri_: re "i've been as much as told i don't belong here, for not being a chemist or..." i'm pretty sure you misunderstood whatever it was. the official hplusroadmap party line is that it's good to have real world applicable skills, and doing something to advance the cause is more important than cheerleading. you're winning on both counts as far as i see it --- Log closed Tue Oct 25 00:00:58 2022
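Editor's aside (not part of the log): muurkha's claim at 20:52 that 100 sequential experiments beat 100 parallel ones can be made concrete with a toy model. The problem, function names, and numbers below are all illustrative assumptions, not anything from the conversation: locating an unknown threshold x* in [0, 1) with yes/no experiments of the form "is x* below this probe?".

```python
# Toy model of adaptive vs. batch experimental design (illustrative only).
# Parallel batch: all n probes are placed on a fixed grid, designed before
# any results are known, so the answers only narrow x* to one grid cell.
# Sequential: each probe is chosen after seeing the previous answer
# (binary search), so every experiment halves the remaining interval.

def parallel_uncertainty(n):
    """Interval width remaining after n pre-designed grid probes."""
    return 1.0 / (n + 1)

def sequential_uncertainty(n):
    """Interval width remaining after n adaptively chosen probes."""
    return 1.0 / (2 ** n)

if __name__ == "__main__":
    for n in (7, 100):
        print(n, parallel_uncertainty(n), sequential_uncertainty(n))
```

With 7 adaptive probes you already match what 127 parallel probes achieve; at n = 100 the gap is astronomical. This is the informational edge maaku's 100 mini Haber-Bosch plants gave up in exchange for wall-clock speed, which is exactly the trade-off the two are arguing about.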