--- Log opened Sat Mar 04 00:00:00 2023
00:00 -!- flooded [flooded@gateway/vpn/protonvpn/flood/x-43489060] has quit [Remote host closed the connection]
00:01 -!- flooded [flooded@gateway/vpn/protonvpn/flood/x-43489060] has joined #hplusroadmap
00:26 -!- test_ [flooded@gateway/vpn/protonvpn/flood/x-43489060] has joined #hplusroadmap
00:27 -!- flooded [flooded@gateway/vpn/protonvpn/flood/x-43489060] has quit [Ping timeout: 248 seconds]
02:52 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has left #hplusroadmap []
03:05 -!- MARVEST [~Malvolio@idlerpg/player/Malvolio] has quit [Ping timeout: 248 seconds]
03:33 -!- cthlolo [~lorogue@77.33.23.154.dhcp.fibianet.dk] has joined #hplusroadmap
04:23 -!- Malvolio [~Malvolio@idlerpg/player/Malvolio] has joined #hplusroadmap
04:28 -!- flooded [flooded@gateway/vpn/protonvpn/flood/x-43489060] has joined #hplusroadmap
04:28 < hprmbridge> harmoniq.punk> And they are. They have ZERO incentives to fund simple and cheap concepts. They will fund only expensive and complex engineering ones. It is very important that we build simple and cheap nanotech manufacture. No more excuses. Using light is not actually a very good idea. EUV is based on 1950s ideas presented by Feynman. https://www.youtube.com/watch?v=4eRCygdW--c&t=3122s We can achieve this in many other
04:28 < hprmbridge> harmoniq.punk> ways than just using expensive lenses with crazy error correction to manipulate electromagnetic fields. Manipulating em fields can be done in other simple ways. The problem we have today is that people who understand these things and fight for these are massively underfunded. If we want to control the research in the direction we want we need to control the money. So tell me how do we put serious
04:28 < hprmbridge> harmoniq.punk> capital control into the hands of people that understand technology. Yes, Musk is one of those who deserve to control capital; that's why he has so much success. But we need a more diverse ecosystem with smart people who control capital flow. Musk is not even that smart, but he looks so smart because the rest of them are really dumb.
04:29 -!- test_ [flooded@gateway/vpn/protonvpn/flood/x-43489060] has quit [Ping timeout: 248 seconds]
05:23 -!- cthlolo [~lorogue@77.33.23.154.dhcp.fibianet.dk] has quit [Read error: Connection reset by peer]
06:20 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has joined #hplusroadmap
06:40 -!- Jay_Dugger [~jwd@47-185-229-91.dlls.tx.frontiernet.net] has joined #hplusroadmap
06:40 < Jay_Dugger> Hello, everyone.
06:46 < muurkha> harmoniq.punk: cryptocurrencies maybe?
06:48 -!- yashgaroth [~ffffffff@2601:5c4:c780:6aa0:c9b1:cf4f:f7cf:667d] has joined #hplusroadmap
07:02 -!- cthlolo [~lorogue@77.33.23.154.dhcp.fibianet.dk] has joined #hplusroadmap
07:26 < lsneff> .t https://www.micro-epsilon.com/displacement-position-sensors/interferometer/
07:26 < lsneff> .title
08:07 -!- flooded is now known as _flood
08:40 <+gnusha> https://secure.diyhpl.us/cgit/diyhpluswiki/commit/?id=defd090b Bryan Bishop: ok, maybe lots of computation is fine >> http://diyhpl.us/diyhpluswiki/projects/human-like-cognitive-abilities/
08:45 < kanzure> sorta called that one wrong back in 2014 ("Must not require impossibly huge amounts of computational resources. Implementations must run on commodity computing hardware.")
09:08 < hprmbridge> kanzure> https://cdn.discordapp.com/attachments/1064664282450628710/1081624138445226104/https3A2F2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.png
09:12 <+gnusha> https://secure.diyhpl.us/cgit/diyhpluswiki/commit/?id=5dfd4f04 Bryan Bishop: fix more >> http://diyhpl.us/diyhpluswiki/transcripts/2023-03-03-twitter-bayeslord-pmarca/
11:04 < L29Ah> https://storage.gra.cloud.ovh.net/v1/AUTH_011f6e315d3744d498d93f6fa0d9b5ee/qotoorg/cache/media_attachments/files/109/824/305/058/018/248/original/ff2f4aac338f72a1.png
11:09 < nsh> lots of computation is fine but clearly not necessary
11:09 < nsh> given that brains exist at a pretty low wattage
11:13 < kanzure> could you explain that? why would a low wattage indicate low computation in brain matter?
11:14 < kanzure> also why would wattage be the way to judge computation in the brain instead of something more obscure like number of molecular collisions per second?
11:28 < nsh> what i mean is that we're evidently still "doing it wrong" wrt the efficiency of biologically-mediated consciousness
11:28 < nsh> watts is just a placeholder for the general inefficiency
11:29 < nsh> (and because there is a principle (assumption) due to Landauer that a minimum quantum of energy is required to change a state in an irreversible computation)
11:30 < nsh> however there's a lot of *difference in kind* between the two ways of doing things
11:30 < nsh> computers, as i've said before, are incredibly stupid. large amounts of energy are expended working against entropy
11:30 < nsh> when entropy can actually be useful, and facilitate computation
11:30 < nsh> you just have to loosen the stubbornly tight grip we have on boolean formalistic logic
11:31 < nsh> there's a book by someone who has shown some promise in this regard you might want to read
11:32 < nsh> https://www.amazon.co.uk/Primacy-Doubt-climate-uncertainty-understand/dp/0192843591/ref=tmm_hrd_swatch_0?_encoding=UTF8&qid=&sr=
11:33 < nsh> .t https://www.youtube.com/watch?v=w-IHJbzRVVU
11:34 < nsh> Tim Palmer is someone i feel i could have a decent conversation with that might move things in the right direction
11:34 < nsh> and that is not an honour i confer very often
11:38 < kanzure> all i wanted was the dimensional unit or measurement
11:42 < nsh> and if all you wanted were all you needed life would be simpler indeed
12:03 < docl> hmm. what's a viable plan to fund simple+cheap compute production out of cryptocurrency
12:09 < L29Ah> you buy all the PoW crypto and the market reacts by optimizing PoW mining, including computing hardware production!
12:10 < docl> yeah but how does it get directed to the underrated cheap speculative ideas instead of big silicon's existing asic factories
12:13 < kanzure> docl: to my knowledge crypto mining didn't drive any particular innovation in compute manufacturing
12:14 < kanzure> taek42 would know more but i think the fabs started to limit the production of crypto mining chips to save room for their other customers (and/or they had other long-term capacity commitments they had to honor)
12:15 < kanzure> just doing the math it would seem that synapses are a million times cheaper to grow than it is to build the tens of thousands of transistors to model a synapse (not that modeling a synapse is absolutely necessary, of course)
12:16 < fenn> re the human-like cognitive ability software page, "Must have very good reasons for using any previous approach" - didn't that also happen with neural networks? the only thing that really changed was that computers got bigger. so massively more computing is a "good reason" to re-investigate old failed approaches
12:17 < fenn> there should be a middle ground between simulating every possible aspect of a synapse, and pretending a ReLU is a synapse. more importantly, the overall brain architecture should resemble a human, for comprehensibility, regardless of the low level specifics
12:18 < fenn> i would guess that a neural network that's connected together the way a human brain is, would think and learn in a similar way to how a human thinks and learns
12:19 < kanzure> human brain isn't particularly comprehensible tho?
12:19 < fenn> but the behavior might be more comprehensible
12:19 < kanzure> unfortunately i'm not current on machine learning techniques to say how much special sauce has been added to multi-layer neural networks (gradient descent isn't new either)
12:20 < fenn> well "deep learning" started around 2012 or so, and that was just old fashioned neural networks but with more layers
12:20 < kanzure> actually the thing that changed the most was nvidia APIs and CUDA i think-- you could always wire up a bunch of GPUs if you had to
12:20 < L29Ah> docl: via price signals: more venture investors become interested in fishy startups when the expected profit is higher
12:21 < kanzure> tensorflow, keras and pytorch were helpful, but mostly CUDA
12:21 < fenn> why did i say 2012. i meant 2007(?) with geoff hinton using ANNs for document classification/clustering
12:22 < L29Ah> otoh plenty of money will trickle to existing electronics manufacturing and utility companies, so it's not the best method of course
12:23 < L29Ah> some foundations elect to have competitions of various sorts, like "make your lab mouse live 10% longer than the previous maximum mouse lifespan and you get $1M", the same can be used for electronics
12:23 < kanzure> wonder why there hasn't been more work on human brain simulation (not emulation), the low number of inter-regional axons should be useful for making simpler computational models
12:24 < kanzure> anyway, i do think silicon fabs are going to be a huge bottleneck going forward
12:24 < kanzure> people handwave and say oh we'll just move them to the US and build a few $10 billion fabs as if that's a real thing that happens
12:24 < kanzure> or as if the industry isn't made up of an array of enormous gatekeeping conglomerates that don't necessarily have an interest in a thousand flowers blooming
12:25 < kanzure> (fabs do get made but it's slow and i think most of their capacity is spoken for before building anyway?)
12:27 < kanzure> demand for really good AI compute will suck up most of the available hardware and compute capacity, and price out anyone from using the better models. smaller models will be cheap but comparatively useless and you're not getting modern hardware anyway due to forthcoming supply scaling problems.
12:28 < fenn> intel is building a "largest on the planet" fab in ohio
12:29 < fenn> i read some prognostication on the near future stratification of companies into those providing foundation models and those making tweaks for specific applications, and how the latter is going to get totally screwed by the former
12:30 < kanzure> TSMC for latest node processes is doing like 5,000 wafers/day which is pretty cool, altho that's probably not a single fab line
12:31 < fenn> i wonder if tesla is going to utilize their vast fleet of sleeping car AI hardware for training foundation models
12:31 < kanzure> i don't think that we're going to be able to scale up silicon fab production- we can add fabs but it's going to be <50 fabs/year or something
12:32 < fenn> on HN they said it took 2048 V100 GPUs 23 days to train LLaMa, which works out to about $5 million per training, or $30 million to buy the cards outright
12:32 < fenn> the thing is, ~2000 GPUs isn't a whole lot in the world market
12:33 < kanzure> the nvidia A80 GPUs are like $15,000/ea, i don't know what the total production quantity was for these things.
12:33 < fenn> so are you predicting that hardware requirements are actually greater than this, or that future models will be even bigger and require more training, or what?
12:34 < kanzure> well, our current strategy seems to be throwing more compute at things
12:35 < fenn> sorry, A100 not V100
12:35 < kanzure> and it seems to work.. so we will keep doing this.
12:37 < kanzure> in the short term, since training is more expensive, there probably won't be as much of a supply crunch i suppose, since you can just train and be done and then use a handful of GPUs for your queries (and then you sell access to keep utilization high)
12:38 < fenn> huh steve mann was one of the first people doing GPGPU stuff
12:38 < kanzure> i don't think all future AI models are going to be $x million of training followed by 8x A100 GPUs for regular use. i could see some taking thousands of GPUs to run queries and do useful human-level work.
12:41 < fenn> i misremembered, the prediction was that foundation models would not actually be available to the companies using them. i got confused because of what people are actually doing with stable diffusion, where they re-train the model to suit their general needs, or trade embeddings for specific tasks
12:41 < kanzure> i don't see how that was misremembered by your previous text
12:42 < fenn> it's really going gangbusters right now, civitai.com has a lot of really impressive work available for free, mostly for making porn, but it's sort of amazing this kind of globbing together random stuff actually works in practice
12:43 < fenn> this is the problem with reading things on twitter; there's absolutely no way to find them again
12:45 < kanzure> with biological neurons we have some training problems but they are many orders of magnitude cheaper.. and don't have silicon fab scaling problems.
12:45 < kanzure> and biological reproduction is decentralized (for now) (don't tell anyone!)
12:46 < fenn> time for another Moratorium For Your Safety (tm)
12:46 < L29Ah> > the nvidia A80 GPUs are like $15,000/ea
12:46 < L29Ah> consider not supporting the monopolist
12:46 < fenn> uh if you buy AMD are you not supporting the monopolist?
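[A rough sanity check of the LLaMA training figures quoted above (2048 GPUs for 23 days, ~$5 million per training run, ~$30 million to buy the cards). The $15,000-per-card purchase price and the implied rental rate are assumptions for illustration, not numbers taken from the log:]

    # back-of-envelope check of the LLaMA training cost figures discussed above
    gpus = 2048
    days = 23
    gpu_hours = gpus * days * 24            # ~1.13 million GPU-hours
    card_price = 15_000                     # assumed purchase price per card, USD
    rental_rate = 5_000_000 / gpu_hours     # implied $/GPU-hour if a run costs $5M

    print(f"GPU-hours: {gpu_hours:,}")                            # 1,130,496
    print(f"implied rental rate: ${rental_rate:.2f}/GPU-hour")    # ~$4.42
    print(f"hardware purchase: ${gpus * card_price / 1e6:.1f}M")  # ~$30.7M

[At an implied ~$4.40/GPU-hour the $5M and $30M figures are mutually consistent.]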
12:46 < L29Ah> i'm pretty sure the required GPUs become multiple times cheaper when you aren't tied to a particular vendor-locking language
12:46 < kanzure> it's sort of surprising that compute hasn't been bottlenecked by the regulators ("too much computational freedom is dangerous")
12:46 < fenn> if there's a competitor is it really a monopolist?
12:47 < L29Ah> nVidia and AMD isn't a monopoly
12:47 < kanzure> there's only so much fab capacity available
12:48 < fenn> kanzure we all got along fine despite the 'chip shortage'
12:48 * L29Ah descends back into the abyss mumbling about accelerationists
12:48 < kanzure> didn't the price of new cars completely skyrocket because of chips? i suppose they might have blamed chips more than the other industrial material problems in the supply chain.
12:48 < fenn> that's obviously a bullshit reason. cars never needed chips before and they don't now either
12:49 < kanzure> they were probably designed to have chips and weren't designed not to have them
12:50 < fenn> how is crypto nonsense affecting the GPU market these days?
12:50 < kanzure> it's been significantly reduced because ethereum (and copycats) have moved away from GPU mining
12:51 < kanzure> as a result there is less upward pressure on the GPU market, although AI applications are now replacing some of that pressure
12:52 < kanzure> this has been your free GPU weather forecast.. back to you sydney.
12:53 < fenn> i am a good bing
12:53 < kanzure> what i would like is a future where anyone can get a pretty advanced AI- where there is enough compute capacity for everyone to have at least one personal assistant with human-like cognitive abilities
12:53 < kanzure> and making all the intelligence very decentralized would be good too
12:54 < docl> how do you stop big corps from centralizing? regulation?
12:54 < fenn> there's two kinds of assistants, the at-your-side servant that answers queries and executes tasks immediately, and a more long term background sort of awareness that is monitoring trends and stuff with your interests in mind
12:55 < fenn> the immediate kind seems like it will not be in use very often
12:55 < kanzure> docl: dunno if you can stop that but don't think you have to in order to get decentralized human-level cognitive ability either
12:56 < kanzure> fenn: the now-defunct FTX Future Fund defined AGI as basically something that could do most of the tasks that a $20/hour human knowledge worker could perform, for about the same cost
12:58 < kanzure> instant query response is nice but i would also like to send something off to read about modern fabrication techniques for ultra-high vacuum systems without having to spoonfeed it other than giving it access to a web browser/some VM
12:58 < fenn> sure, but an arbitrary "knowledge worker" is going to be running 24/7, and scales up a lot. there's not a good reason why you'd end up in a stable economic situation where each human has access to 1.0 human equivalents worth of surplus compute, instead of some fraction of that, or large multiple of that
12:58 < kanzure> scales up a lot?
12:58 < fenn> companies will buy millions of them
12:58 < kanzure> oh is that where scaling comes from
12:59 < fenn> if you get value from having 1 human equivalent working for you, it's pretty likely you'll get value from 2, and so on
12:59 < fenn> is there a standard unit for this yet? i don't feel right saying 100 IQ = 1 human
12:59 < kanzure> anyway yes you're right that given the current economics it is unlikely that your average person is going to be able to afford an average human-level AGI
13:00 < kanzure> and yet people have kids anyway, and pets
13:00 < kanzure> and almost everyone has a smartphone
13:00 < kanzure> or some kind of computer in their pocket at least..
13:01 < fenn> and sub-human-equivalent AI girlfriends/boyfriends have been a thing for several years now
13:01 < fenn> i only really read about that for the first time yesterday
13:01 < kanzure> oh the replika scandal
13:01 < fenn> yes
13:02 < fenn> an emotionally perceptive and emotionally persuasive artificial agent will be way more impactful on the whole AI rights debate than any amount of intelligence
13:02 < fenn> re "and yet people have kids anyway, and pets"
13:02 < fenn> it's going to be hard to figure out who is the pet for whom
13:04 < fenn> HN was also speculating about how training foundation models is like banking computation by memoization
13:04 < kanzure> somehow the production of humans isn't quite centralized at the moment- i'm not saying it ought to be regulated, it just looks like an anomaly
13:04 < fenn> if you can duplicate common computation that means you need less
13:05 < fenn> so you train a big model on a bazillion enormous data center GPUs, and then download it to your (for all intents and purposes zero economic cost) smartphone
13:06 < fenn> well, someone else trains it, not you
13:06 < superkuh> And not a smartphone because they don't have storage.
13:06 < fenn> the small LLaMa model is only 7GB
13:07 < superkuh> Could you fit 7GB on your smart phone?
13:07 < fenn> it doesn't have enough RAM to actually run it. i expect that in a few years hardware and algorithmic improvements will meet in the middle and you'll be able to run a language model directly on a phone in realtime
13:07 < superkuh> Anyway, sorry, just my smartphone hate bleeding through. Ignore me.
13:07 < fenn> i understand 100%
13:08 < fenn> i also hate how the smartphone software ecosystem is user-hostile, but the hardware is fantastic, and absurdly cheap to acquire
13:09 < kanzure> fenn: so you think the most likely scenario is more like cheap smaller sub-human models (narrow stuff) that are widely distributed and decentralized (except perhaps the training), and that a scenario where most people get access to human-level AI is unlikely?
13:09 < fenn> in the short term
13:09 < fenn> i mean that is already the situation
13:10 < kanzure> modulo human brains
13:10 < fenn> google used to upload audio recordings to "the cloud" to be processed on big GPU farms for speech to text. now they do it directly on the phone
13:11 < fenn> long before we get AI directly on the phone, it will be built into the OS as a cloud-linked service
13:13 < kanzure> you mentioned individuals getting access to a "large multiple" of human-level compute, what scenario is that
13:14 < fenn> the scenario where we don't blow up all the fabs in a stupid war
13:15 < hprmbridge> MatthewGoodman> Cars and silicon shortages are harder linked than normal because of regulatory requirements in the US
13:15 < fenn> renting GPU time for specific tasks like, "make a photorealistic picture of X" is actually really cheap, but people aren't accustomed to thinking about paying $0.005 for a product
13:16 < kanzure> how long would it take to produce enough GPUs for everyone to have at least one 24/7 human-level AI if we assume 50x A100 GPUs per human-level AI
13:16 < fenn> honestly i don't think the concept of "human level AI" will ever make sense
13:16 < juri_> more than you can afford to power.
13:16 < kanzure> fenn: it's basically a worker that i assign tasks to and i don't care if they are a dog on the internet
13:17 < fenn> ok let's look at some kurzweil extrapolation curves
13:17 < kanzure> as long as they can use a mouse and read and type and maybe even speak/watch/listen, and perform cognitive tasks like planning and interpretation and execution then that should be good
13:18 < kanzure> "on the internet nobody knows that you're a dog" etc
13:18 < kanzure> https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_you%27re_a_dog
13:18 < kanzure> wait this doodle earned him $200,000?
13:21 < fenn> i keep hearing that moore's law is dead, but if anything computation seems to be getting cheaper even faster: https://qph.cf2.quoracdn.net/main-qimg-a35b20805bbb9c958875f5f524e5a4cd
13:22 < fenn> kanzure what can "everybody" afford? kurzweil picks an arbitrary value of $1000 worth of compute capacity and from that extrapolates to a date in the future for human-equivalent computation
13:24 < fenn> in the year 2000 or so he predicted we'd get there around 2025 (10^15 "calculations per second")
13:26 < juri_> hardware is pretty cheap. the software to organize that, however... if anything, it's getting more expensive.
13:27 < fenn> oh nice, he made the graphs available to wikipedia https://en.wikipedia.org/wiki/The_Singularity_Is_Near
13:30 < fenn> "What technology will follow integrated circuits, to serve as the sixth paradigm, is unknown ... nanotubes and nanotube circuitry, molecular computing, self-assembly in nanotube circuits, biological systems emulating circuit assembly, computing with DNA, spintronics (computing with the spin of electrons), computing with light, and quantum computing."
13:35 < fenn> i'm trying to estimate how long it will take for 50 A100 GPUs to cost $1000
13:36 < muurkha> I wonder if it was really him or Kathryn
13:36 < fenn> an A100 can do 78 TFLOPs at FP16, or 312 TFLOPs of "Dense Tensor"
13:44 < fenn> "the 120 years of moores law" graph was made by steve jurvetson, here's the updated version https://www.flickr.com/photos/jurvetson/51391518506/
13:57 < kanzure> https://techcrunch.com/2022/01/11/fertilis-ivf-cell-culture-funding/
14:12 < fenn> From the AI Index 2019: "Prior to 2012, AI results closely tracked Moore’s Law, with compute doubling every two years. Post-2012, compute has been doubling every 3.4 months (for a net increase of ~10,000,000x since then)."
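[Two quick arithmetic checks on figures mentioned above: the AI Index "doubling every 3.4 months" claim, and juri_'s point about powering kanzure's hypothetical 50-A100 assistant. The 400 W per card and $0.10/kWh values are assumptions for illustration, not numbers from the log:]

    # does "doubling every 3.4 months" since 2012 really give ~10,000,000x by 2019?
    months = (2019 - 2012) * 12
    doublings = months / 3.4
    print(f"{doublings:.1f} doublings -> {2**doublings:.2e}x")   # ~24.7 -> ~2.8e7, same order as ~10,000,000x

    # rough electricity bill for running 50 A100s continuously
    watts = 50 * 400                        # assumed ~400 W per A100
    kwh_per_year = watts / 1000 * 24 * 365  # ~175,200 kWh/yr
    print(f"{watts/1000:.0f} kW continuous, ~${kwh_per_year * 0.10:,.0f}/yr at an assumed $0.10/kWh")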
14:12 < fenn> graph showing how AI basically started in 2012 https://www.flickr.com/photos/jurvetson/49236276227/
14:18 < hprmbridge> kanzure> https://www.numenta.com/blog/2018/08/29/thalamus-snubbed/
14:18 < hprmbridge> kanzure> https://cdn.discordapp.com/attachments/1064664282450628710/1081702170866483200/fig1-540x400.png
14:19 < kanzure> "Thus there are two feedforward pathways from one cortical area to the next" (what is a "cortical area" in this context?)
14:27 < kanzure> https://www.numenta.com/resources/research-publications/papers/a-framework-for-intelligence-and-cortical-function-based-on-grid-cells-in-the-neocortex/
14:40 < fenn> there were too many different performance metrics (FLOPS in FP16/32/dense, TOPS, etc.) so i gave up trying to do it with numbers and just squinted at jurvetson's graph. using the data after 1995, the trend line intersects 10^15 in 2035
14:54 < fenn> extrapolating along the same line, the time for 50xA100 GPU equivalent compute to fall to $1000 should be 8 years, or 2031
14:54 < kanzure> the meshcode memory mechanism paper (goult) doesn't reference the "pattern of holes in the perineuronal net" paper (tsien) nor the "the variance in extracellular sulfated proteoglycans composes a potential locus of analog information storage" theory of memory
14:54 < fenn> my calculations: 50*10000USD/(10^(8*.3333))
14:55 < kanzure> "variance in extracellular sulfated proteoglycans" is from https://luysii.wordpress.com/2014/02/23/are-memories-stored-outside-of-neurons/ or https://www.science.org/doi/10.1126/science.1245423 but it's more of a throw-away comment
14:55 < fenn> the factor .33 comes from compute improving 9 orders of magnitude over 27 years
14:56 < kanzure> fenn: what are you calculating? looks like $1,100 to me.
14:56 < fenn> yes
14:56 < fenn> close enough
14:56 < fenn> 8.1 years :)
14:56 < kanzure> oh, 50xA100 GPU equivalent compute for $1100
14:57 < kanzure> ok
14:58 < kanzure> trying to decide if i like https://metaphor.systems/ it's definitely digging up some wacky blog posts that i think google wouldn't show me
14:58 < fenn> so if kurzweil is right, we'll need more like 1000 A100 GPUs to mimic a human brain
14:59 < kanzure> there's this one it found which is kind of funny https://headtruth.blogspot.com/2018/04/reasons-for-doubting-brain-could-do.html basically someone who is looking at feats of human intellect (like the savants) and saying well therefore materialism/reductionism has failed us because existing neuroscience theories can't explain memory (and rather than concluding that we simply don't have a good theory, he concludes reductionism can't solve the problem)
14:59 < kanzure> (warning: crank site)
15:20 < fenn> i probably did something wrong in the "1000 A100" estimate. kurzweil predicts 10^15 FLOPS equals a human brain, (i think,) and in 4.5 to 6.5 years that will cost about $1000, depending on the definition of a FLOP vs a TOP or whatever
15:21 < fenn> it should only require 3 to 13 A100s to mimic a brain, with a similar algorithm to what the brain uses
15:22 < kanzure> actually based on semiconductor chip design cycle timelines, if it's going to be in ~5 years then in theory you should be able to find a chip designer starting the project in about a year given the timelines involved in chip manufacturing
15:22 < fenn> its_happening.gif
15:24 -!- cthlolo [~lorogue@77.33.23.154.dhcp.fibianet.dk] has quit [Read error: Connection reset by peer]
15:25 < fenn> https://metaphor.systems/search?q=the+most+well+justified+estimate+of+what+year+AI+will+exceed+human+capabilities+is%3A
15:25 < kanzure> that's cheating >:(
15:25 < fenn> seems like a decent search engine
15:25 < fenn> hans moravec is smarter than me, so yes, it's cheating
15:26 < kanzure> pfft "What I'm trying to say here is that "2035" is just a wild guess, and it might as well be next Tuesday."
15:26 < kanzure> yeah i'm trying to figure out if i like this search engine, it's returning interesting results at least
15:27 < fenn> aha the "kurzweil" graph was actually made by moravec!
15:28 < fenn> https://gwern.net/doc/www/jetpress.org/3d313da208f6eac437cf56b4dca0acc49c93da33.html
15:28 < kanzure> most of kurzweil's book was cribbed from the extropians mailing list and wta mailing list
15:29 < fenn> oh i forgot about monetary inflation
15:54 < kanzure> nectome talks a lot about the synaptic memory engram theory https://nectome.com/the-case-for-glutaraldehyde-structural-encoding-and-preservation-of-long-term-memories/
15:54 < kanzure> wonder if glutaraldehyde preserves extracellular matrix structure
15:54 -!- codaraxis___ [~codaraxis@user/codaraxis] has quit [Quit: Leaving]
15:55 < lsneff> Honestly, I think gpus are not going to be able to scale like that
15:58 < kanzure> chatgpt seems to be good at organizing information about human brain neuroanatomy (dunno how accurate it is though)
15:58 < hprmbridge> kanzure> https://cdn.discordapp.com/attachments/1064664282450628710/1081727437802582077/image.png
16:09 < kanzure> https://en.wikipedia.org/wiki/Connectogram "Connectograms are [circular] graphical representations of connectomics, the field of study dedicated to mapping and interpreting all of the white matter fiber connections in the human brain."
16:17 < kanzure> huh you can download some connectomes https://braingraph.org/cms/download-pit-group-connectomes/
16:32 < lsneff> Fruit fly connectome was mostly finished a few years ago
16:36 < kanzure> this one is human
16:36 < kanzure> i don't understand why they would need fMRI data to produce this though. why can't this be done from dissected human brain matter?
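[The extrapolation arithmetic from the 14:40-15:21 exchange above, written out. The $10,000 per A100 matches fenn's "50*10000USD" figure, the 78/312 TFLOPS numbers are the A100 specs fenn quoted, and the 10^15 FLOPS "human brain" value is the kurzweil/moravec figure under discussion, not an independent estimate:]

    import math

    brain_flops = 1e15            # kurzweil/moravec figure quoted above
    a100_fp16   = 78e12           # A100 FP16 throughput, per fenn
    a100_tensor = 312e12          # A100 "dense tensor" throughput, per fenn
    print(f"A100s per brain: {brain_flops/a100_tensor:.1f} to {brain_flops/a100_fp16:.1f}")  # ~3.2 to ~12.8

    # cost extrapolation: compute price falls ~9 orders of magnitude per 27 years,
    # i.e. ~0.333 orders of magnitude per year. years for N A100s to fall to $1000:
    def years_to_1000usd(n_gpus, price_each=10_000, oom_per_year=9/27):
        return math.log10(n_gpus * price_each / 1000) / oom_per_year

    print(f"50 A100s: {years_to_1000usd(50):.1f} years")   # ~8.1
    print(f"3 A100s:  {years_to_1000usd(3):.1f} years")    # ~4.4
    print(f"13 A100s: {years_to_1000usd(13):.1f} years")   # ~6.3

[This reproduces the "3 to 13 A100s", "8.1 years", and "4.5 to 6.5 years" numbers in the log.]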
16:37 < lsneff> That may be a different kind of connectome
16:37 < lsneff> Instead of per-neuron connectivity, just connectivity between regions of the brain
16:38 < kanzure> (oh i mean tractography MRI thing, which isn't fMRI whoops)
16:46 < lsneff> Right, so it’s not the really interesting kind of connectome
16:51 < fenn> "The most advanced tractography algorithm can produce 90% of the ground truth bundles" implies that there is a method of going from dissected brain tissue to a "true" tractogram
16:52 < fenn> "The existence of these tracts and circuits has been revealed by histochemistry and biological techniques"
16:53 < kanzure> using cartography flatmaps for brain visualization http://larrywswanson.com/?page_id=1415
16:55 < hprmbridge> kanzure> diagram of the rat central nervous system (swanson's "brain map 4.0" project) http://larrywswanson.com/wp-content/uploads/2015/06/BM4-Flatmap-4-Beta-3-rat.png
16:57 < kanzure> zoomable vector diagram http://larrywswanson.com/wp-content/uploads/2015/04/Rat-Flatmap-4-Beta-3.pdf
16:58 < kanzure> hrm this is missing axon tracts
16:59 -!- Jay_Dugger [~jwd@47-185-229-91.dlls.tx.frontiernet.net] has quit [Ping timeout: 255 seconds]
17:00 < kanzure> http://larrywswanson.com/wp-content/uploads/2015/03/Neurome-bilateral.jpg
17:02 < fenn> the whole graph vs adjacency matrix thing again
17:02 < fenn> the last image has no data tho
17:05 < fenn> maps like this could have overlays for each region that highlight the connections when you mouse over regions connected to them. easy enough to do with CSS and transparent pngs
17:06 < fenn> this is about as much detail as i can handle in one picture https://larrywswanson.com/wp-content/uploads/2015/06/Nauta-Karten-flatmap-connections.jpg
17:09 < kanzure> https://cerenaut.ai/2014/05/27/thalamocortical-architecture/
17:09 < kanzure> there needs to be better brain visualization tools :\
17:10 < kanzure> just dump the data into gephi and let users query for their favorite module or region and see it in the network
17:13 < kanzure> https://universalprior.substack.com/p/serendipitous-connections-applying
17:14 < kanzure> it found the cognitive consilience paper https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3243081/
17:17 < fenn> there are so many overlapping neuroanatomy descriptors (brain regions) that you really need to see it overlaid on the anatomy map
17:18 < fenn> a node and edges graph is losing too much info
17:22 < kanzure> https://cerenaut.ai/2015/12/22/how-to-build-a-general-intelligence-circuits-and-pathways/
17:22 < kanzure> https://cerenaut.ai/2015/10/30/how-to-build-a-general-intelligence-what-we-think-we-already-know/
17:28 < fenn> what does the matrix math syntax A(: , j) mean? specifically, what is that colon representing?
17:30 < muurkha> "everything"
17:31 < muurkha> I mean, I don't know what language A(: , j) is, so I'm just guessing
17:32 < kanzure> "In matrix math syntax, the colon notation represents a range of values in a particular dimension of a matrix. For example, the syntax A(:, j) means "all rows of the jth column of matrix A"."
17:33 < fenn> i saw something similar in python today which i also didn't get. related? https://electrek.co/wp-content/uploads/sites/3/2021/08/Screen-Shot-2021-08-19-at-9.59.22-PM.jpg
17:33 < kanzure> that looks like a python slice to me, but maybe numpy does some evil override of that
17:33 < kanzure> or pytorch in this case(?)
17:35 < fenn> yes but what would x[:, :ndigit*2] do?
17:35 < fenn> am i just real dumb and it's a 2d slice?
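[A minimal numpy sketch of the slicing semantics being asked about here; the array contents and the ndigit value are made up for illustration:]

    import numpy as np

    # x[:, :n] is a 2-D slice: every row, and the first n columns
    x = np.arange(12).reshape(3, 4)
    ndigit = 1

    print(x[:, 2])           # like A(:, j) in matrix notation: all rows of column 2
    print(x[:, :ndigit*2])   # all rows, first ndigit*2 columns
    print(x[::-1])           # rows in reverse order (start:stop:step with step = -1)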
17:35 < muurkha> yeah, the whole reason Python supports slices with commas between them (and ...) is to support Numpy (well, Numeric)
17:35 < muurkha> yes, it's just a 2-D slice
17:36 < fenn> in the same screenshot there's the notation [::-1]
17:36 < muurkha> that means "backwards"
17:36 < muurkha> .py "fenn"[::-1]
17:36 < fenn> what does :: do?
17:37 < muurkha> start:stop:step
17:37 < muurkha> start defaults to the beginning, stop defaults to the end, and step defaults to 1, except that if step is negative, start defaults to the end and stop defaults to the beginning
17:38 < fenn> thanks
17:38 < muurkha> this is unfortunately very similar to Matlab syntax for the same thing, which is semantically different
17:38 < muurkha> it's worth noting that in the case of Numpy the slices are actually aliases to the original array, so making them is very cheap
17:39 < muurkha> I don't know what PyTorch's semantics for that are
17:57 < kanzure> metaphor.systems digs up somewhat older items like this http://danielkeating.blogspot.com/2012/01/could-we-breed-dogs-as-smart-as-people.html (not particularly interesting but it's something google wouldn't have shown)
18:01 < kanzure> https://blog.jtoy.net/the-cognitive-animal-farm/
18:01 < kanzure> https://blog.jtoy.net/ideas-for-experiments-to-run-on-a-chicken-farm/
18:02 < kanzure> it finds dvorsky's blog http://www.sentientdevelopments.com/2009/03/uplifting-animals-yes-we-should.html
18:04 < kanzure> http://davidbrin.blogspot.com/2014/09/will-we-uplift-other-species-to-sapience.html
18:11 < kanzure> http://www.sentientdevelopments.com/2009/04/abolitionist-project.html it finds david pearce content too.
18:20 < kanzure> https://www.animalintelligence.org/2008/07/21/lost-parrot-gives-its-name-and-address/
18:31 < kanzure> someone wrote into the journal of nature in 1887 proposing to breed animals for intelligence https://www.nature.com/articles/036246b0
18:33 < kanzure> "... In 1955, in their new home of Hot Springs, Arkansas, the Brelands opened the I.Q. Zoo, where visitors would pay, in essence, to watch Skinnerian conditioning in action—even if in the form of basketball-playing raccoons."
18:44 < kanzure> https://en.wikipedia.org/wiki/Hundesprechschule_Asra "Before inventing the telephone, Alexander Graham Bell tried to teach his Skye Terrier to talk."
18:44 < kanzure> selecting for chickens laying more eggs selected for bullying behavior https://freethoughtblogs.com/pharyngula/2016/07/12/how-eugenics-fails/
18:44 < kanzure> seems like someone would have noticed though...
18:48 < kanzure> freitas proposes (by proxy of "jean rostand") "A young [human] embryo has already in the cerebral cortex the nine billion pyramidal cells which will condition its mental activity during the whole of its life. This number, which is reached by geometric progression or simple doubling, after 33 divisions of each cell (2, 4, 8, 16, 32, and so on), could in turn be doubled if we succeeded in causing just one more division -- the 34th." http://www.xenology.info/Xeno/16.1.1.htm ref: Jean Rostand; Can Man Be Modified?; (Basic Books, Inc., N. Y.; 1959).
18:59 -!- yashgaroth [~ffffffff@2601:5c4:c780:6aa0:c9b1:cf4f:f7cf:667d] has quit [Quit: Leaving]
18:59 < kanzure> the moose was domesticated? https://en.wikipedia.org/wiki/Kostroma_Moose_Farm
19:03 < kanzure> "why not test the infinite monkeys on a typewriter in practice?" http://timeblimp.com/?page_id=1493 didn't turn into anything but at least someone bothered to check.
19:09 < kanzure> francis galton 1865 "So far as I am aware, no animals have ever been bred for general intelligence" https://psychclassics.yorku.ca/Galton/talent.htm
19:28 <+gnusha> https://secure.diyhpl.us/cgit/diyhpluswiki/commit/?id=d3ffde00 Bryan Bishop: Revert "fix more" >> http://diyhpl.us/diyhpluswiki/transcripts/2023-03-03-twitter-bayeslord-pmarca/
19:36 < kanzure> apparently there's no search engine that resolves references like "Science vol 350 pp 780-786 2009"
20:07 < docl> I wonder if there's some way to programmatically grow complex nanostructures on the surfaces of balls in a ball mill. they would tend to be worn smooth, so the growth would need to be smooth down to the microscopic level, but that doesn't necessarily mean no complex nanostructures
20:45 * muurkha grows structures on the surfaces of docl's balls
20:49 < fenn> "An 18th-century Englishman named Jedediah Buxton reputedly could multiply three 6-digit numbers in his head almost instantly, but his mind was otherwise dull and he remained a day laborer all his life." <- was there really nobody interested in harnessing this ability for engineering or science?
20:50 < muurkha> Not in the 18th century
20:52 < fenn> they had 7 place logarithm tables in 1795
20:53 < muurkha> did they? I thought those came a bit later
20:53 < fenn> 7 digit*
20:55 < muurkha> what's the difference?
20:55 < fenn> actually i'm not sure
20:55 < fenn> maybe people say 'places' to be unambiguous about whether the leading zero is included or not
--- Log closed Sun Mar 05 00:00:01 2023