--- Log opened Tue Nov 12 00:00:47 2024
00:23 -!- justanotheruser [~justanoth@gateway/tor-sasl/justanotheruser] has joined #hplusroadmap
00:26 -!- geneh2 [~cam@pool-173-66-187-118.washdc.fios.verizon.net] has joined #hplusroadmap
00:26 -!- balrog [znc@user/balrog] has joined #hplusroadmap
00:47 < nsh> by virtue of the same property that any periodic continuous function can be decomposed into scaled versions of the different ways of wiggling with different periods, any stochastic process can be represented as a linear sum of randomly wiggling in basic directions that together cover all the ways you can stumble about
00:52 < nsh> it's just projection (onto an arbitrary basis) to some extent
00:53 < nsh> sin and cos are at right angles (90 degree phase offset) and so form a basis; the basis for KLT is also orthonormal
00:53 < nsh> (in FT, KLT respectively)
00:55 < nsh> .gptc develop and intuition for the Karhunen-Loeve Transform from the starting point of an elementary understanding of the application of Fourier analysis to time-varying signals
00:55 < nsh> *an
00:55 < gptpaste> "To develop an intuition for the Karhunen-Loeve Transform (KLT), let's start with our understanding of Fourier analysis for time-varying signals and gradually build up to the KLT concept. 1. Fourier Analysis Recap: - Fourier analysis decomposes a signal into a sum of sinusoidal components. - It represents a signal in terms of frequency components. - The basis function - https://gist.github.com/Epivalent/234599e609c9f94abcf9bf9a3c6fa1f2/raw
00:58 < nsh> .gptc -c expand upon the covariance matrix and its eigendecomposition (giving a brief recap of the rudimentary theories underlying both) and discuss briefly its applications in machine learning
00:59 < gptpaste> "Certainly! Let's expand on the covariance matrix, its eigendecomposition, and their applications in machine learning. We'll start with a brief recap of the underlying concepts and then discuss their relevance in machine learning. 1. Covariance Matrix: Recap: - The covariance matrix is a square matrix that captures the pairwise covariances between variables in a dataset. - Fo - https://gist.github.com/Epivalent/d452d7ba3629cbaed080a8291eb75876/raw
01:03 < nsh> .t https://manlius.substack.com/p/phantom-oscillations-in-your-data
01:03 < saxo> Phantom oscillations in your data? - Complexity Thoughts
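A minimal numpy sketch of the discrete version of the idea above: estimate a covariance matrix from an ensemble of signals, eigendecompose it, and project each signal onto the resulting orthonormal basis (the discrete Karhunen-Loeve transform), in the same way the Fourier transform projects onto sines and cosines. The toy ensemble, sizes, and names below are assumptions for illustration, not anything taken from the linked gists.

# discrete KLT sketch: covariance matrix -> eigendecomposition -> projection
import numpy as np

rng = np.random.default_rng(0)

# ensemble of 500 signals, each sampled at 64 time points
n_signals, n_samples = 500, 64
t = np.linspace(0.0, 1.0, n_samples)
# toy stochastic process: random mixture of two slow modes plus noise (assumed)
X = (rng.normal(size=(n_signals, 1)) * np.sin(2 * np.pi * t)
     + rng.normal(size=(n_signals, 1)) * np.cos(2 * np.pi * t)
     + 0.1 * rng.normal(size=(n_signals, n_samples)))

# covariance matrix: pairwise covariances between the 64 sample positions
X_centered = X - X.mean(axis=0)
C = X_centered.T @ X_centered / (n_signals - 1)   # (64, 64), symmetric

# eigendecomposition: C = V diag(w) V^T with orthonormal columns in V
w, V = np.linalg.eigh(C)          # eigenvalues in ascending order
w, V = w[::-1], V[:, ::-1]        # sort descending, most variance first

# KLT coefficients: project each signal onto the eigenvector basis
Z = X_centered @ V
print(np.allclose(V.T @ V, np.eye(n_samples)))    # the basis is orthonormal

Ordering the eigenvalues largest-first is what makes truncating the expansion meaningful later in the discussion: the leading basis vectors carry the most variance of the ensemble.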
01:07 < nsh> the most important different from fourier analysis is that the basis functions are random (independently distributed) and minimise the error when you truncate the expansion of the series, because the process of choosing them depends as it does on how things vary one with another in the input distribution
01:07 < nsh> *difference
01:08 < nsh> this is one of the many subjects where the first three decent looking video lectures/expositions are from indian academics, which happens a lot in certain subjects and bodes better for them than the countries one might have expected fifty years ago
01:09 < nsh> this one looks like it could have been made 80 years ago: https://www.youtube.com/watch?v=uTwm3Zv1HfA
01:09 < nsh> .t
01:09 < saxo> Lecture - 16 K-L Transform
01:12 < nsh> huh, yud actually did something unobjectionable: https://x.com/ESYudkowsky/status/1734772533373939784
01:13 < nsh> see, anyone can temporarily assume the form of a person who might ought continue existing by just keeping all their own thoughts to themselves (he says, demonstrating by counterexample)
01:19 < nsh> .gpt -c discuss the decorrelation property of the K-L transform, its utility and applications and provide some intuition as to why it's possible to decorrelate the Z variables
01:19 < gptpaste> null# Tue Nov 12 09:19:27 AM UTC 2024 - USER: develop and intuition for the Karhunen-Loeve Transform from the starting point of an elementary understanding of the application of Fourier analysis to time-varying signals - https://gist.github.com/Epivalent/949b35d77aff7a67db956cb3734ed3ea/raw
01:19 < nsh> d'oh
01:24 < nsh> the way to visualise it is this: say you have something that has a somewhat arbitrary shape (distribution). let's imagine a cloud up in the sky. this cloud is a distribution of water droplets at different densities about some relatively arbitrary shape, which could extend in an arbitrary number of directions but we'll just imagine the three we're used to.
01:26 < nsh> we have a measuring thing that is three rulers at right-angles to one another measuring from 0 to 1m. there's a point at which we can centre that coordinate system and stretch the rulers that is particular to that cloud only
01:26 < nsh> and where the average of the sum of the squares of the resulting coordinates is minimised
01:26 < nsh> (over all the droplets)
01:30 < nsh> this is also very good: https://x.com/ESYudkowsky/status/1718654143110512741
01:30 < nsh> the actual dangers (as always) are the hum-drum ones
01:30 < nsh> they involve people failing to sufficiently venerate and honour the ways of their ancestors
01:30 < nsh> such as understanding computing from first principles and making the necessary sacrifices for a good harvest
01:31 < nsh> okay the first few paragraphs were good. not writing book-length twitter essays is another way to revere the past
01:53 < nsh> (Karhunen, [not co-]incidentally was involved in the creation of the first Finnish computer: ESKO)
01:53 < fenn> thanks nsh. i already understand PCA. from chatgpt's description it sounds like the only difference is that it's an infinite sum of covariance components rather than a finite number
01:54 < fenn> it's unclear whether these basis functions are gaussians or left up to the implementer or what
01:54 < nsh> in practice it's finite
01:55 < fenn> well so is everything
01:56 < fenn> PCA can only have up to, uh, the number of features?
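A rough sketch of the cloud-of-droplets picture and of the decorrelation property that nsh's prompt asks about, assuming a synthetic 3D point cloud (all numbers and names below are illustrative): centre the rulers at the cloud's mean, rotate them onto the covariance eigenvectors, then check that the resulting coordinates are decorrelated and that keeping only the leading axes gives a small mean squared reconstruction error.

# cloud-of-droplets sketch: centring, rotation onto eigenvectors, truncation
import numpy as np

rng = np.random.default_rng(1)

# anisotropic 3D cloud of droplets: stretched, rotated, offset from the origin
A = rng.normal(size=(3, 3))
cloud = rng.normal(size=(5000, 3)) @ A.T + np.array([10.0, -4.0, 2.0])

mean = cloud.mean(axis=0)            # the centring point particular to this cloud
centered = cloud - mean
C = np.cov(centered, rowvar=False)   # 3x3 covariance of the droplet coordinates

w, V = np.linalg.eigh(C)             # eigenvalues ascending, eigenvectors orthonormal
w, V = w[::-1], V[:, ::-1]

Z = centered @ V                     # coordinates along the cloud's own axes

# decorrelation: the covariance of Z is (numerically) diagonal
print(np.round(np.cov(Z, rowvar=False), 6))

# truncation: keep only the leading k axes and reconstruct
k = 2
approx = Z[:, :k] @ V[:, :k].T + mean
mse = np.mean(np.sum((cloud - approx) ** 2, axis=1))
print(mse, w[k:].sum())   # error roughly equals the sum of the discarded eigenvalues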
01:56 < nsh> (you only ever have finite input data and the point is to cover as much of the input with few terms of the series but the exact degree of trade off (loss of resolution and whitening) that's optimal will be application-specific)
01:57 < nsh> (in theory the fourier transform is also infinite)
01:57 < fenn> .gptc -with https://gist.github.com/Epivalent/d452d7ba3629cbaed080a8291eb75876/raw now explain how KLT is different from PCA
01:57 < fenn> did i do that right
01:58 < gptpaste> "To explain how the Karhunen-Loève Transform (KLT) differs from Principal Component Analysis (PCA), let's first recap both concepts and then highlight their differences: 1. Recap of KLT: - KLT is a method for representing a stochastic process as a series of orthogonal functions. - It's based on the eigendecomposition of the covariance function of the process. - KLT p - https://gist.github.com/Epivalent/5eceef8807d2d56ce568223dc0e234fd/raw
01:59 < nsh> i think so
02:00 < nsh> PCA is more general, KLT assumes that the input is more random (made up of Gaussian things)
02:01 < nsh> also continuous setting
02:01 < nsh> short discussion here: https://dsp.stackexchange.com/questions/49210/what-is-the-difference-between-pca-and-karhunen-loeve-kl-transform
02:03 -!- TMM [hp@amanda.tmm.cx] has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
02:03 -!- TMM [hp@amanda.tmm.cx] has joined #hplusroadmap
02:04 < fenn> claude said "eigenfunctions" not gaussians
02:04 < fenn> which i assume means "anything you can possibly imagine"
02:07 < fenn> maccone for example is looking for sine waves in colored noise
02:11 < fenn> i assume this means that there's ONE eigenfunction that represents the detected sine wave carrier signal
02:35 < fenn> reminds me of that tegmark AI architecture (kolmogorov arnold networks) https://arxiv.org/abs/2404.19756
02:36 < fenn> instead of variable connection weights between nodes/neurons they learn arbitrary activation functions
02:37 < fenn> far too much name dropping for me
03:15 < fenn> i don't know how i survived without parcellite (selection-buffer/clipboard with history)
03:42 -!- darsie [~darsie@84-112-12-36.cable.dynamic.surfer.at] has joined #hplusroadmap
04:39 -!- Gooberpatrol66 [~Gooberpat@user/gooberpatrol66] has quit [Ping timeout: 260 seconds]
04:57 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has quit [Read error: Connection reset by peer]
04:57 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has joined #hplusroadmap
05:03 < kanzure> what is the lore around why claude sounds smarter?
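One way to make the "continuous setting" point from the KLT-vs-PCA exchange above (01:56 to 02:11) concrete: a KLT works from the covariance function of a process and yields eigenfunctions, whereas PCA eigendecomposes a sample covariance matrix of features. The sketch below discretises a kernel whose Karhunen-Loeve expansion is known in closed form, standard Brownian motion with K(s, t) = min(s, t) and eigenfunctions sqrt(2) * sin((k - 1/2) * pi * t); the grid size and variable names are arbitrary choices for illustration.

# KLT of a continuous process: eigenfunctions of a covariance kernel
import numpy as np

n = 500
t = (np.arange(1, n + 1) - 0.5) / n          # midpoint grid on (0, 1)
K = np.minimum.outer(t, t)                   # covariance kernel of Brownian motion

# the integral operator is approximated by K * (1/n) on the grid
w, V = np.linalg.eigh(K / n)
w, V = w[::-1], V[:, ::-1]

# compare the leading numerical eigenfunction with the analytic one
phi1_numeric = V[:, 0] * np.sqrt(n)          # rescale to unit L2 norm on [0, 1]
phi1_exact = np.sqrt(2.0) * np.sin(0.5 * np.pi * t)
if phi1_numeric[0] < 0:                      # eigenvectors are defined up to sign
    phi1_numeric = -phi1_numeric
print(np.max(np.abs(phi1_numeric - phi1_exact)))   # small discretisation error
print(w[0], 1.0 / (0.5 * np.pi) ** 2)              # leading eigenvalue, about 4/pi^2

This also illustrates fenn's reading of "eigenfunctions": they are whatever functions the covariance kernel happens to have as eigenvectors of its integral operator, not a fixed family like gaussians or sinusoids.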
05:44 < fenn> broader education, more personality, probably better end-user testing
05:44 < fenn> https://xkcd.com/1838/
05:45 < fenn> supposedly all the smart non-altman-cultist people jumped ship from OpenAI long ago
05:46 < kanzure> hey i happen to enjoy a good cult
06:49 -!- gl00ten [~gl00ten@bl5-237-194.dsl.telepac.pt] has quit [Ping timeout: 264 seconds]
07:36 < fenn> 42 million FPS image recognition with convolutional differentiable logic gate networks: https://arxiv.org/abs/2411.04732 https://video.twimg.com/ext_tw_video/1856022581993775104/pu/vid/avc1/1280x720/yxBii8TXP_4tki8F.mp4
07:37 < fenn> the blue stuff in the video is the probability distribution of each logic gate, so it can learn the logic gates in a continuous fashion with gradient descent
07:42 < fenn> this is a 2000x speedup over previous state of the art
07:44 < fenn> or 5000x if you don't care about accuracy so much
08:03 -!- gl00ten [~gl00ten@194.117.18.100] has joined #hplusroadmap
08:18 < hprmbridge> alonzoc> Is this really that new? I had a toy project that was basically this a while ago; it pops straight out of https://arxiv.org/abs/1805.11813 if you try to do gradient descent through circuits using their notion of naive probability
08:18 < hprmbridge> alonzoc> Amazing practicality results tho
08:21 < hprmbridge> alonzoc> The choice of initialisation makes a lot of sense
08:22 < hprmbridge> alonzoc> Also because the range is fixed a fixed precision approximation should do really well..
08:22 < hprmbridge> alonzoc> @Lev time for you to write some more insane GPU code
08:23 < hprmbridge> Lev> nu
09:06 -!- gl00ten [~gl00ten@194.117.18.100] has quit [Ping timeout: 252 seconds]
10:22 -!- Gooberpatrol66 [~Gooberpat@user/gooberpatrol66] has joined #hplusroadmap
10:33 -!- Gooberpatrol66 [~Gooberpat@user/gooberpatrol66] has quit [Read error: Connection reset by peer]
11:33 -!- TMM [hp@amanda.tmm.cx] has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
11:33 -!- TMM [hp@amanda.tmm.cx] has joined #hplusroadmap
11:57 < kanzure> https://gnusha.org/pi/bitcoindev/CABaSBaz13bUoHCupXYhmX+yS0dn89f80yx8ZD3uO5-1RiLZJCQ@mail.gmail.com/T/#u
12:22 < kanzure> "All the estimates I can find for average receptors per synapse in human CNS range from 140-300."
15:37 -!- Malvolio [~Malvolio@idlerpg/player/Malvolio] has quit [Ping timeout: 244 seconds]
16:26 < hprmbridge> kanzure> young follicles can rejuvenate old oocytes? https://x.com/noor_siddiqui_/status/1856064496235884666
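On the differentiable logic gate networks discussed above (07:36 to 08:23): a toy sketch of the mechanism fenn describes, in which each gate holds a learnable probability distribution over the 16 two-input boolean functions and is evaluated as a relaxed mixture so that gradient descent can pick the gate. This is only an illustration of the idea in arXiv:2411.04732 / arXiv:1805.11813 under assumed names and shapes, not the papers' code.

# toy differentiable logic gate: softmax mixture over the 16 boolean functions
import numpy as np

def soft_gate(a, b, logits):
    """a, b: probabilities in [0, 1]; logits: 16 learnable gate preferences (assumed)."""
    # real-valued relaxations of all 16 two-input boolean functions
    ops = np.array([
        np.zeros_like(a),        # FALSE
        a * b,                   # AND
        a - a * b,               # A AND NOT B
        a,                       # A
        b - a * b,               # NOT A AND B
        b,                       # B
        a + b - 2 * a * b,       # XOR
        a + b - a * b,           # OR
        1 - (a + b - a * b),     # NOR
        1 - (a + b - 2 * a * b), # XNOR
        1 - b,                   # NOT B
        1 - b + a * b,           # A OR NOT B
        1 - a,                   # NOT A
        1 - a + a * b,           # NOT A OR B
        1 - a * b,               # NAND
        np.ones_like(a),         # TRUE
    ])
    p = np.exp(logits - logits.max())
    p = p / p.sum()                        # softmax over the 16 gate types
    return np.tensordot(p, ops, axes=1)    # expected output under the mixture

# after training, each gate is discretised to argmax(logits), leaving a plain
# logic circuit that can run at the speeds quoted in the thread
a, b = np.array([0.9, 0.1]), np.array([0.8, 0.7])
logits = np.zeros(16); logits[1] = 5.0     # this gate currently leans towards AND
print(soft_gate(a, b, logits))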
16:26 < hprmbridge> kanzure> https://www.biorxiv.org/content/10.1101/2024.03.11.584343v1
16:27 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has left #hplusroadmap []
16:29 -!- etc-vi [~etc-vi@user/meow/girlchunks] has quit [Quit: nya]
16:31 -!- etc-vi [~etc-vi@user/meow/girlchunks] has joined #hplusroadmap
17:05 -!- Malvolio [~Malvolio@idlerpg/player/Malvolio] has joined #hplusroadmap
18:01 -!- darsie [~darsie@84-112-12-36.cable.dynamic.surfer.at] has quit [Ping timeout: 248 seconds]
19:28 -!- Gooberpatrol66 [~Gooberpat@user/gooberpatrol66] has joined #hplusroadmap
20:12 -!- mxz__ [~mxz@user/mxz] has joined #hplusroadmap
20:12 -!- mxz_ [~mxz@user/mxz] has quit [Ping timeout: 260 seconds]
20:13 -!- mxz [~mxz@user/mxz] has quit [Ping timeout: 248 seconds]
20:13 -!- mxz__ is now known as mxz
21:14 -!- Hooloovoo [~Hooloovoo@hax0rbana.org] has quit [Quit: ZNC 1.8.2+deb2+deb11u1 - https://znc.in]
21:17 -!- Hooloovoo [~Hooloovoo@hax0rbana.org] has joined #hplusroadmap
22:19 -!- mxz_ [~mxz@user/mxz] has joined #hplusroadmap
22:21 -!- TMM [hp@amanda.tmm.cx] has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
22:21 -!- TMM [hp@amanda.tmm.cx] has joined #hplusroadmap
--- Log closed Wed Nov 13 00:00:48 2024