--- Log opened Mon Apr 29 00:00:40 2024
01:41 -!- darsie [~darsie@84-112-12-36.cable.dynamic.surfer.at] has joined #hplusroadmap
01:42 -!- justanotheruser [~justanoth@gateway/tor-sasl/justanotheruser] has quit [Ping timeout: 260 seconds]
01:55 -!- darsie [~darsie@84-112-12-36.cable.dynamic.surfer.at] has quit [Remote host closed the connection]
01:55 -!- justanotheruser [~justanoth@gateway/tor-sasl/justanotheruser] has joined #hplusroadmap
01:55 -!- darsie [~darsie@84-112-12-36.cable.dynamic.surfer.at] has joined #hplusroadmap
02:19 -!- TMM_ [hp@amanda.tmm.cx] has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
02:19 -!- TMM_ [hp@amanda.tmm.cx] has joined #hplusroadmap
02:28 < hprmbridge> kanzure> analog compute stuff https://twitter.com/BasedBeffJezos/status/1784760185371967916
03:46 < fenn> let's see... neuromorphic computing requires exotic components like memristors, or spiking neural networks don't match the application
03:48 < fenn> so instead we're going to use, wait for it... stochastic superconducting quantum thermal fluctuations to sample from non-linear Hamiltonian dynamics
03:48 < fenn> raise your hand if you understood that
03:59 -!- A_Dragon [A_D@libera/staff/dragon] has quit [Quit: ZNC - https://znc.in]
04:01 -!- A_Dragon [A_D@libera/staff/dragon] has joined #hplusroadmap
04:49 < hprmbridge> Lev> is this extropic
04:49 < hprmbridge> Lev> are they just playing with ising networks or smth
04:50 < hprmbridge> alonzoc> Yeah, it's a little dumb. It's slightly better than standard neuromorphics, as I think it hardware-accelerates the right thing. No matter our learning algorithm, it'll either be sampling a posterior or trying to find the maximum a posteriori point. So it makes sense from the perspective of "we want hardware accelerators which will benefit learning algorithms in general". But yeah... exotic components are not a win
04:51 < hprmbridge> Lev> even then, the sampling thing? like that's either just RNG or it's something like QSA, in which case it sucks anyway
04:54 < hprmbridge> alonzoc> SNN accelerators are only useful for SNNs, and ANNs if you have some correspondence. You can't accelerate arbitrary learning machines. However, I think they are missing some important features which seem to be winners, such as error propagation. You'll get that for free within a chip, but I'd be interested to see what they're saying about networking these systems
05:05 < kanzure> "Energy-Based Models (EBMs) offer hints at a potential solution, as they are a concept that appears both in thermodynamic physics and in fundamental probabilistic machine learning. In physics, they are known as parameterized thermal states, arising from steady-states of systems with tunable parameters. In machine learning, they are known as exponential families."
05:05 < kanzure> "Exponential families are known to be the optimal way to parameterize probability distributions, requiring the minimal amount of data to uniquely determine their parameters"
05:06 < kanzure> from https://www.extropic.ai/future
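Extropic has not published specifics, but if the "ising networks" guess above is right, the object being sampled is an energy-based model p(s) proportional to exp(-E(s)/T). A minimal software sketch of that sampling step, assuming an Ising energy with made-up couplings (all sizes, couplings, and the temperature below are illustrative, not anything Extropic has described):

    # Minimal sketch: Gibbs sampling from an Ising-style energy-based model,
    # p(s) ~ exp(-E(s)/T) with E(s) = -1/2 * s.J.s - h.s.
    # Sizes, couplings, and temperature are arbitrary illustration values.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 16                                  # number of spins
    J = rng.normal(0, 0.5, (n, n))
    J = (J + J.T) / 2                       # symmetric couplings
    np.fill_diagonal(J, 0.0)                # no self-coupling
    h = rng.normal(0, 0.1, n)               # local fields
    T = 1.0                                 # temperature

    def gibbs_sweep(s):
        """One full sweep: resample each spin from its conditional."""
        for i in range(n):
            field = J[i] @ s + h[i]         # local field felt by spin i
            # conditional probability of s_i = +1 is a logistic in the field
            p_up = 1.0 / (1.0 + np.exp(-2.0 * field / T))
            s[i] = 1 if rng.random() < p_up else -1
        return s

    s = rng.choice([-1, 1], n)              # random initial state
    for sweep in range(1000):               # burn-in, then s is ~ a sample
        s = gibbs_sweep(s)
    print("sample:", s, "energy:", -0.5 * s @ J @ s - h @ s)

The thermodynamic-hardware pitch is essentially that physical noise performs the inner loop for free; simulated annealing, as Lev notes below, is this same loop with T swept downward.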
05:05 < kanzure> "Exponential families are known to be the optimal way to parameterize probability distributions, requiring the minimal amount of data to uniquely determine their parameters" 05:06 < kanzure> from https://www.extropic.ai/future 05:08 < kanzure> someone making "a monolayer of rigid molecular tripods (adamantane framework with thiol legs" https://docs.google.com/document/d/1WWLfIuKirMCSgpQO75x9aWPK1zH9xB0t4g_V11gHgis/edit 05:08 < hprmbridge> Lev> https://cdn.discordapp.com/attachments/1064664282450628710/1234476431698825338/41598_2023_49559_Fig1_HTML.png?ex=6630df44&is=662f8dc4&hm=b588bbe71dff0f7baef2468537372071a40a9bdf22a9f19b8cec486c27350ac9& 05:08 < hprmbridge> Lev> Their image is nice but it's not a new concept (this is from a simulated annealing paper) 05:09 < kanzure> from the guy who was reviewing nanosystems https://twitter.com/jacobrintamaki/status/1784826396864299248 06:20 < geneh2> energy efficiency is one interesting aspect of thermodynamic compute 06:23 < geneh2> Landauer says the energetic cost of compute goes up with error rate. Digital electronics need a very low error rate, but thermodynamic compute can tolerate more errors 06:24 -!- justanotheruser [~justanoth@gateway/tor-sasl/justanotheruser] has quit [Ping timeout: 260 seconds] 06:30 < geneh2> I'd be more worried about whether models can be copied. the biggest advantage of digital NNs is you can train them once and copy them 06:30 < geneh2> this may not be possible for analog systems because of hardware variance 06:31 -!- justanotheruser [~justanoth@gateway/tor-sasl/justanotheruser] has joined #hplusroadmap 06:32 < hprmbridge> Lev> ...? isn't it the number of bit *erasures* which is required for all non reversible compute? 06:32 < hprmbridge> Lev> also, until they give more info on what exactly "EBMs" *are*, their current descr just matches tree-structured simulated annealing in hardware 06:33 < hprmbridge> Lev> Which is energy efficient and fast, but we'll see how good it is for actual use 06:33 < hprmbridge> alonzoc> Generalising models are usually in wide basins so for standard models the hardware variance is probably not a problem especially with a fine tuning pass when deploying to new hardware. 06:38 < hprmbridge> alonzoc> Yeah but it needn't be a physical bit the erasure of a bit is more fundamentally the fact you have a non-invertable map on the state space. 
06:39 < hprmbridge> Lev> Oh sure, you could use it to nyom erasures stochastically, but i'm sus that that's meaningful without significant overhead
06:40 < hprmbridge> Lev> Like this is basically saying the function you're learning will have no avalanche-like behavior during learning
06:40 < hprmbridge> Lev> ensurable for some problems, but if you want to have the advantages generally you'd need some fancy structure beyond that
06:40 < hprmbridge> Lev> that i doubt would be efficiently implementable
07:18 -!- cthlolo [~lorogue@2a02:aa7:4622:9e01:3758:f17a:f413:571d] has joined #hplusroadmap
07:26 -!- cthlolo [~lorogue@2a02:aa7:4622:9e01:3758:f17a:f413:571d] has quit [Remote host closed the connection]
07:28 -!- cthlolo [~lorogue@2a02:aa7:4622:9e01:3758:f17a:f413:571d] has joined #hplusroadmap
07:49 -!- cthlolo [~lorogue@2a02:aa7:4622:9e01:3758:f17a:f413:571d] has quit [Quit: Leaving]
07:59 -!- cthlolo [~lorogue@2a02:aa7:4622:9e01:3758:f17a:f413:571d] has joined #hplusroadmap
08:04 -!- cthlolo [~lorogue@2a02:aa7:4622:9e01:3758:f17a:f413:571d] has quit [Client Quit]
10:50 < hprmbridge> heathal> .tw https://twitter.com/prash_singh/status/1780522588873040316
10:50 < hprmbridge> heathal> it's been a while 🙂
12:12 < nsh> (the bacterial motor, was posted recently but bears repeating :)
12:55 -!- juri_ [~juri@implicitcad.org] has quit [Ping timeout: 252 seconds]
13:31 -!- darsie [~darsie@84-112-12-36.cable.dynamic.surfer.at] has quit [Quit: Wash your hands. Don't touch your face. Avoid fossil fuels and animal products. Have no/fewer children. Protest, elect sane politicians. Invest ecologically.]
13:33 -!- darsie [~darsie@84-112-12-36.cable.dynamic.surfer.at] has joined #hplusroadmap
13:36 -!- TMM_ [hp@amanda.tmm.cx] has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
13:36 -!- TMM_ [hp@amanda.tmm.cx] has joined #hplusroadmap
14:40 < fenn> more data needed, but it seems like magnesium threonate supercharges world detail and narrative depth of dreams
14:40 < fenn> i don't know why i waited so long to try this
14:41 < L29Ah> does it supercharge your hair?
14:42 < fenn> dunno yet, how would i measure this?
14:42 < L29Ah> the choice of anion is weird if you wanted magnesium
14:42 < fenn> it's for brain uptake
14:42 < fenn> threose is a sugar
14:43 < fenn> (it's not threonine)
14:44 < L29Ah> oh, i thought threonate is a salt of threonic acid
14:46 < fenn> yes, it is
14:47 < fenn> i had never heard of threose or threonic acid and assumed it was an amino acid chelate like the many other magnesium supplements
14:47 < L29Ah> https://www.ncbi.nlm.nih.gov/corecgi/tileshop/tileshop.fcgi?p=PMC3&id=416055&s=94&r=1&c=1 magic
16:11 < geneh2> @Lev, the Landauer limit depends on temperature. One needs a potential energy barrier between the two states higher than some value in order to prevent the state switching due to thermal motion. But for digital computation, the barrier needs to be a bit higher, so that the rate of errors due to thermal motion is low enough that bit flips don't cause problems.
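To put numbers on geneh2's barrier argument, assuming a crude Boltzmann-factor model in which the probability of a thermal bit flip scales as exp(-Eb/kT): the required barrier grows only logarithmically in the required reliability, so a digital-grade error rate needs a few tens of kT where a tolerant analog bit gets by on a few. The target error rates below are illustrative:

    # Sketch: barrier height needed so thermal flips stay below a target
    # error rate, using a crude Boltzmann-factor model p_flip ~ exp(-Eb/kT).
    # Inverting: Eb = kT * ln(1 / p_flip).
    import math

    for p_flip in (1e-1, 1e-3, 1e-25):   # tolerant analog vs digital-grade
        barrier_in_kT = math.log(1.0 / p_flip)
        print(f"p_flip={p_flip:g}: barrier ~ {barrier_in_kT:.1f} kT")
    # A 10x stricter error rate only adds ln(10) ~ 2.3 kT to the barrier,
    # which matches geneh2's point: digital needs a somewhat higher
    # barrier, not a categorically different one.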
17:30 -!- justanotheruser [~justanoth@gateway/tor-sasl/justanotheruser] has quit [Remote host closed the connection]
17:30 -!- justanotheruser [~justanoth@gateway/tor-sasl/justanotheruser] has joined #hplusroadmap
18:03 -!- darsie [~darsie@84-112-12-36.cable.dynamic.surfer.at] has quit [Ping timeout: 268 seconds]
18:16 -!- darsie [~darsie@84-112-12-36.cable.dynamic.surfer.at] has joined #hplusroadmap
19:00 < hprmbridge> kanzure> https://www.globalcryonicssummit.com/
19:50 -!- darsie [~darsie@84-112-12-36.cable.dynamic.surfer.at] has quit [Ping timeout: 264 seconds]
20:41 -!- mxz__ [~mxz@user/mxz] has joined #hplusroadmap
20:42 -!- mxz [~mxz@user/mxz] has quit [Ping timeout: 252 seconds]
20:42 -!- mxz__ is now known as mxz
20:42 -!- mxz_ [~mxz@user/mxz] has quit [Ping timeout: 260 seconds]
21:43 -!- justanotheruser [~justanoth@gateway/tor-sasl/justanotheruser] has quit [Ping timeout: 260 seconds]
21:52 -!- TMM_ [hp@amanda.tmm.cx] has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
21:52 -!- TMM_ [hp@amanda.tmm.cx] has joined #hplusroadmap
21:54 -!- justanotheruser [~justanoth@gateway/tor-sasl/justanotheruser] has joined #hplusroadmap
22:48 -!- mxz_ [~mxz@user/mxz] has joined #hplusroadmap
--- Log closed Tue Apr 30 00:00:41 2024