--- Log opened Thu Nov 11 00:00:29 2021
06:32 < lsneff> There are various ways of using pytorch, etc. to run SNNs
06:50 < muurkha> lsneff: what have you found works best for you?
07:38 < lsneff> The lab I'm part of uses pytorch
08:14 < fltrz_> plenoptic imaging seems to use lenslet arrays or multiple cameras. that's *not* what I propose. My idea uses a conventional optical axis without a lenslet array, but does use multiple (2 or more) lenses placed sequentially.
08:15 < fltrz_> also seems like plenoptic imaging requires recording on a digital sensor with postprocessing
08:17 < fltrz_> whereas my (still unwritten) proposal should also work with the naked eye
08:24 < muurkha> fltrz_: it sounds like your approach still won't have a whole volume in focus on a plane, the way a pinhole camera does, will it?
08:24 < muurkha> you said it's scaling down a 3D volume instead of a 2D area
08:25 < muurkha> modulo the nonlinearity of Z, isn't that what a regular lens does? is your objective to linearize Z?
08:27 < fltrz_> muurkha: unlike a pinhole camera and unlike telecentric imaging there's no micro-aperture; the volume really should be in focus. consider using a normal camera and taking a picture of a garden 15+ meters away. the whole garden seems in focus. Technically that's impossible, yet it's all in focus *up to sensor resolution*
08:29 < fltrz_> muurkha: no, scaling UP a small volume homothetically, and placing that upscaled image beyond a hyperfocal distance, so it's in focus, just like a picture of a garden, or of a mountain range, or the stars
08:29 < fltrz_> muurkha: what do you mean with "nonlinearity of Z"? what does Z refer to?
08:33 < fltrz_> muurkha: well, a lens places an input x_i (or just x in short) at an output x_o as follows: x_o(x, L, f) = ((L+f)*x-L^2)/(x+f-L), where x, x_o, L are positions along the X axis, L being the position of the lens
08:34 < fltrz_> this is obviously not linear in x. for a fixed x the magnification is independent of the height of an object at x, but the magnification depends on the depth x
08:35 < fltrz_> nesting this function repeatedly, but with L_1,f_1 ; L_2,f_2 ; ... gives the general imaging behaviour of an N-ideal-lens system
08:37 < fltrz_> the rational polynomial (in the original input x before the first lens) has coefficients in terms of these L's and f's, so you can set unwanted terms to 0. the same holds for the lateral magnification as a function of x, the L's and the f's
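The x_o map quoted at 08:33 can be checked numerically against the textbook thin-lens equation. A minimal sketch in plain Python (no dependencies; the sample numbers are arbitrary, chosen only for illustration):

    # check that x_o(x, L, f) = ((L+f)*x - L^2)/(x + f - L) is the thin-lens
    # equation 1/s + 1/s' = 1/f re-expressed in absolute axial coordinates,
    # with object distance s = L - x and image distance s' = x_o - L
    def x_o(x, L, f):
        return ((L + f) * x - L**2) / (x + f - L)

    for x, L, f in [(0.0, 2.0, 0.5), (-3.0, 1.0, 0.8), (0.2, 5.0, 1.5)]:
        s, s_prime = L - x, x_o(x, L, f) - L
        resid = 1/s + 1/s_prime - 1/f
        print(f"x={x:5.2f}  x_o={x_o(x, L, f):8.4f}  residual={resid:.2e}")
    # the residuals print as ~0, i.e. the map agrees with the thin-lens relation
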
08:40 < fltrz_> choosing the same magnification factor laterally and depthwise, and eliminating all coefficients that don't correspond to a simple linear 3D scaling with a depth shift, teaches us how to select the f's and L's for a wanted 3D scaling and shift, so that you can build a magnifier or microscope that enlarges a volumetric scene while keeping the magnified image of the intended volume beyond a hyperfocal distance
08:41 < muurkha> by nonlinearity of Z I mean that with a regular convex lens or concave mirror, when you move the sensor plane a constant distance closer to the lens, the plane of things in focus in the scene moves a non-constant distance (and at some point ceases to exist entirely)
08:41 < muurkha> as you say, not linear in x
08:41 < fltrz_> right, that's the nonlinearity of a single lens (the x_o function I wrote above)
08:42 < muurkha> I was using Z because that's what we call that dimension in computer graphics, is all
08:42 < fltrz_> yeah, after expressing my confusion I understood the Z as depth
08:43 < fltrz_> like a z-buffer or depth buffer
08:43 < muurkha> right
08:43 < muurkha> I'm pretty ignorant about optics
08:43 < fltrz_> so if you look at the x_o formula, it is clear it cannot be made linear, because the x in the denominator has a fixed coefficient
08:45 < muurkha> note that if you choose a coordinate system such that L+f=0 that becomes just -L²/(x+2f), so the nonlinearity is just a hyperbola, a translated -L²/x
08:45 < fltrz_> but if you nest x_o(x_o(x, L2, f2), L1, f1) you get more complicated expressions; thankfully the L's and f's start appearing in the unwanted coefficients, so we can *set them to 0* by selecting the focal lengths and positions of a multilens system
08:46 < fltrz_> yes
08:46 < fltrz_> @ hyperbola
08:46 < muurkha> I'd worked that out geometrically a few weeks ago but wasn't confident that it was correct
08:48 < muurkha> so I guess I think of a regular lens as producing a perfectly-in-focus 3-D volume that's an image of the outside world
08:48 < muurkha> it's just that it suffers from this nonlinear distortion
08:48 < fltrz_> right!
08:49 < muurkha> where the images of further-away things are smaller in all three dimensions
08:49 < fltrz_> it seems to me that nobody ever thought it possible to design a multilens system without such distortion
08:53 < fltrz_> like how Kepler kept trying to fit fancy oval equations etc. to Brahe's data (back then many of the ovaloid types of shapes were a hot topic in mathematics, like a hammer looking for a nail), and only after lots of frustration half-jokingly tried fitting a classic ellipse from antiquity. He incorrectly assumed his predecessors would have already tried it, and thus incorrectly concluded it couldn't be simple ellipses. So half-jokingly he tried it anyway and discovered Kepler's laws...
08:56 < muurkha> what's the benefit of eliminating the distortion in hardware?
08:57 < muurkha> I mean you still have occlusion and defocus, right?
08:58 < fltrz_> it just makes it easier to design for constant (volumetric) magnification instead of planar magnification, where each plane has its own magnification. Once the image is a simple 3D scaling of the input object, it can be placed beyond the hyperfocal distance, so it's all in focus, just like the whole depth of a garden or a mountain range is "in focus"
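The "nest the map and set the unwanted coefficients to zero" step can be mechanized. A minimal sketch for the simplest case, the axial map of two ideal thin lenses, assuming sympy; the lateral-magnification condition fltrz_ also wants is not treated here:

    import sympy as sp

    x, L1, f1, L2, f2 = sp.symbols('x L1 f1 L2 f2')

    def x_o(x, L, f):
        # single ideal-lens map from the discussion: lens at position L, focal length f
        return ((L + f) * x - L**2) / (x + f - L)

    # light passes lens 1 (L1, f1) first, then lens 2 (L2, f2)
    two_lens = sp.cancel(x_o(x_o(x, L1, f1), L2, f2))
    num, den = sp.fraction(two_lens)
    print(sp.Poly(num, x).all_coeffs())   # numerator coefficients in the L's and f's
    print(sp.Poly(den, x).all_coeffs())   # denominator coefficients

    # the axial map is affine in x (constant depth magnification, no hyperbolic
    # distortion) exactly when the denominator's x coefficient vanishes
    cond = sp.expand(den).coeff(x, 1)
    print(sp.solve(sp.Eq(cond, 0), L2))   # -> [L1 + f1 + f2], the afocal spacing

    # with that spacing, the depth magnification is the constant slope of the map
    affine = two_lens.subs(L2, L1 + f1 + f2)
    print(sp.simplify(sp.diff(affine, x)))   # -> f2**2/f1**2

For two lenses the only solution is the familiar afocal (telescope) spacing, and the depth magnification (f2/f1)² is the square of the afocal pair's transverse magnification -f2/f1, which is presumably why matching the two, as fltrz_ requires, pushes the design toward more than two lenses (or toward unit magnification).
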
09:00 < fltrz_> muurkha: the goal of my setup is just so that people can work (hopefully with arm- and hand-like micromanipulators) under a magnifier, while being able to choose what to look at without having to refocus all the time. like working with your hands to hold workpieces, tools, parts, ... so that people can do things comfortably in 3D under a magnifier
09:01 < fltrz_> the goal is not to circumvent occlusions. but it is the goal to minimize focus to the point that it's as if you are working with tools at a table
09:01 < fltrz_> minimize *defocus*
09:03 < fltrz_> I believe there's plenty of untapped potential in less-educated people (hobbyists, artists, ...) to participate in microfabrication / prototyping / ... if only they could work comfortably on smaller scales
09:04 < fltrz_> and it would increase the comfort of the more educated whenever they wish to, say, assemble the parts of a prototype at a small scale
09:06 < fltrz_> if you look at things like microminiature art, levsha, ... artists etc. have been doing such things with conventional microscopes, but having a volume in focus will be much more welcoming for the populace at large
09:06 < muurkha> yeah
09:06 < muurkha> the w&m levsha youtube channel is great; is that what you mean?
09:07 < fltrz_> if it's all optical, the cost decreases, since it doesn't necessarily need imaging sensors and GPUs to do number crunching on some esoteric imaging modality
09:07 < muurkha> is it possible I'm not understanding the benefits? if you measure the light at a point between the lens and the spatial image of a light point source, don't you still see some light from that source?
09:07 < fltrz_> yes, I mean both that specific channel as well as the Levsha (left-handed) mythology and USSR art form
09:08 < muurkha> it's hard for an optical system to be lower cost than software
09:08 < muurkha> although a friend of mine is 3-D printing optics for his lab with two-photon stereolithography
09:09 < fltrz_> the esoteric imaging modalities with number crunching typically still need a hardware optical system, think of focus stacking setups, digital holographic imaging, etc.
09:09 < muurkha> not familiar with the USSR art form
09:10 < fltrz_> search terms would be "levsha microminiature"
09:10 < muurkha> thanks
09:11 < muurkha> I guess I think that if you have a converging cone of light from the lens system onto the image of a point source, that light still inevitably adds noise to any image pixel you sample within that cone
09:11 < muurkha> so in that sense you still have "defocus"
09:13 < fltrz_> basically scientists (most of them I think) using their job skills to make miniature art under microscopes. when they want to place something or carve something they basically do it only while holding their breath, use calm to get a lower heart rate, and then only move between heart beats, because it's uncontrollable noise on muscle movement
09:13 < muurkha> do you know about confocal microscopy?
09:13 < muurkha> yeah, but that's a question of manipulators, not sensors
09:14 < fltrz_> yes, I know about confocal microscopy, but again it's not real time. I want to enable tinkerers to fool around in real time on a small scale
09:15 < muurkha> I agree that that's an important objective
09:15 < fltrz_> in the general case, to translate macroscopic skills, we want the volume to be in focus
09:16 < fltrz_> otherwise we force the user to adapt their observational behaviour to the point that intuitive operation is lost
09:17 < muurkha> but I'm not clear on how what you're proposing would eliminate or reduce defocus
09:17 < muurkha> maybe you could do some raytracing to simulate some imagery as it'd be produced by your system
09:18 < muurkha> for me, microscopic defocus is useful, because I don't have binocular fusion. but I'm a minority
09:18 < fltrz_> take your average cell phone camera: if the scene is distant enough, it all seems in focus. and yes, there is still a *circle of confusion* (a pillbox convolution, the cone you describe) beyond the hyperfocal distance; it's just that the radius of this circle has approached the sensor resolution
09:19 < fltrz_> that's interesting, can you elaborate how defocus aids people without binocular fusion?
09:20 < muurkha> well, you can tell what's in the plane that's in focus, and what isn't, because what isn't isn't in focus
09:21 < fltrz_> so there is still a circle of confusion but its dimension is similar to the sensor resolution, thus a tree 15 m away and a tree 40 m away both seem in focus
09:21 < muurkha> and you can move the focal plane to see what's further or closer
09:21 < muurkha> right
09:22 < fltrz_> so my proposal is to magnify in a mathematical sense in 3 directions, and displace the enlarged scene beyond the typical hyperfocal distance
09:22 < fltrz_> then you can use your eyes or a camera module to observe
09:23 < fltrz_> now "magnify in a mathematical sense in 3 directions" is a deceptively simple statement, since we both agree a single lens results in a nonlinearity
09:24 < fltrz_> and, pessimistically, a nonlinearity + another nonlinearity + ... does not in general result in a linearity
09:26 < fltrz_> so to achieve that we just need to put the mapping behaviour of a single lens with focal length and position in a CAS system, and then consider the double-iterated, triple-iterated, etc. systems in sequence until it is mathematically possible to meet the requirements
09:26 < muurkha> computer algebra system system?
09:26 < fltrz_> then you know the least number of lenses necessary
09:26 < fltrz_> yes
09:27 < fltrz_> I hate it when lesser-used acronyms put the final object type as the last letter.
09:27 < muurkha> your retina or a camera sensor is planar though
09:28 < fltrz_> so? we don't want infinite x infinite pixels
09:28 < muurkha> so the image they'll measure is two-dimensional
09:28 < fltrz_> so we implicitly tolerate some blur
09:29 < muurkha> I'm pretty sure you can design an optical system that has the geometrical effect you are describing, just not sure how it improves over just a regular lens
09:29 < fltrz_> yes, think of it this way: a biologist buys or assembles a focus stacking rig. he places a dead hairy fly, takes hundreds of pictures focused at different depths, then uses focus stacking software to get the final image.
09:29 < fltrz_> this could be done with a system of lenses, without focus stacking
09:30 < fltrz_> and it would be real time
09:31 < muurkha> how does removing the nonlinearities help you to do it faster? don't you still have to move your focal plane through hundreds of depths?
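A rough numeric check of the two-trees point above; the phone-camera focal length, f-number, and pixel pitch below are assumed values for illustration, not taken from the discussion:

    # with the lens focused at infinity, the geometric blur-circle diameter for
    # a subject at distance D is roughly f**2 / (N * D)
    f_len = 4.25e-3   # focal length, m (assumed, typical phone main camera)
    N     = 1.8       # f-number (assumed)
    pixel = 1.0e-6    # sensor pixel pitch, m (assumed)

    for D in (15.0, 40.0):            # the two trees in the discussion
        blur = f_len**2 / (N * D)
        print(f"D = {D:4.0f} m   blur ~ {blur * 1e6:.2f} um   pixel ~ {pixel * 1e6:.1f} um")
    # both blur circles come out below the pixel pitch, which is why both
    # trees look equally sharp
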
09:32 < fltrz_> removing the nonlinearities allows you to scale it up without distortion
09:32 < muurkha> the fact that the depths all have the same magnification and are linearly rather than hyperbolically related to the object depth seems like it could be somewhat useful, but not in terms of speed
09:33 < muurkha> more in terms of not wasting sensor pixels
09:33 < fltrz_> the speed difference is because you no longer need to combine images
09:34 < muurkha> you mean, you get pixels focused at many points densely distributed throughout a 3-D volume, rather than a single slice through it, onto a single camera focal plane?
09:35 < muurkha> that sounds like the opposite of not having distortion, it's more like having a maximum amount of distortion
09:36 < muurkha> do you know how to write a raytracer?
09:37 < muurkha> I remember it was super intimidating to me until I did it, and then I realized it was actually pretty simple
09:37 < fltrz_> so to state it differently: suppose you somehow have a sparse cloud of tiny point sources of light; they should all map on the sensor plane as disks of diameter at most roughly the single-color-channel resolution
09:38 < muurkha> yeah, that sounds like "pixels focused at many points densely distributed throughout a 3-D volume on a single camera focal plane"
09:39 < fltrz_> I may eventually write a ray tracer, or even calculate diffraction through the system
09:39 < muurkha> that sounds potentially more interesting than what you were describing before
09:39 < muurkha> yeah, you'd eventually need to calculate diffraction
09:39 < fltrz_> well, I thought that was what I was describing all the time
09:40 < fltrz_> but initially I'm not going for diffraction limited, since it would already be useful at mesoscale miniature tinkering
09:40 < muurkha> I think that if you linearly map the 3-D volume you're magnifying onto another 3-D volume, then a 2-D slice through that volume will map to a 2-D slice through the volume you're magnifying
09:40 < muurkha> so tiny point sources of light that are far from the projected 2-D slice will be out of focus
09:41 < muurkha> diffraction potentially offers an orders-of-magnitude performance improvement for that kind of thing, as you know. but you're describing a geometrical-optics system where diffraction is just a source of error
09:42 < fltrz_> the aperture choise is what trades circle of confusion versus diffraction
09:42 < muurkha> right
09:43 < fltrz_> choise? im native english wtf
09:43 < muurkha> heh
09:43 < muurkha> I think ray-tracing a simulation of your system is probably the immediate next step so you can see how it performs in simulation
09:43 < muurkha> and, as importantly, show it to other people so they can understand
09:44 < fltrz_> the small points will still be "in focus" on the image sensor / retina, if the enlarged image is placed beyond the *hyperfocal distance of that camera / eye*
09:45 < fltrz_> I think that's right
09:45 < fltrz_> but before ray tracing I should simply write up the steps in LaTeX / a math notebook
09:48 < fltrz_> this discussion with you has already improved the rough outline sketch for the writeup. you asked how distortion-free improved the speed, which it doesn't of course, it's just a requirement. so I should start the article with the set of requirements (the niche of the setup) I place on the device. Then each of them is treated as a mathematical condition
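A similarly rough illustration of the aperture trade-off mentioned at 09:42; the wavelength and the image-space defocus below are assumed values, not from the discussion:

    # stopping down (larger N) shrinks the geometric defocus blur (~ defocus / N,
    # for a fixed image-space defocus and image distance ~ f) but grows the Airy
    # diffraction spot (~ 2.44 * lam * N)
    lam   = 550e-9   # wavelength, m (green light)
    defoc = 20e-6    # image-space defocus, m (assumed)

    for N in (1.8, 2.8, 4.0, 5.6, 8.0, 11.0):
        geo  = defoc / N          # geometric blur diameter, m
        diff = 2.44 * lam * N     # Airy-disk diameter, m
        print(f"f/{N:<4}  geometric ~ {geo * 1e6:5.1f} um   diffraction ~ {diff * 1e6:4.1f} um")
    # the useful aperture is roughly where the two are comparable; stopping down
    # further just trades circle of confusion for diffraction, as fltrz_ says
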
09:48 < fltrz_> muurkha: I'll make sure you get a copy as well
09:50 < fltrz_> muurkha: the raytracer would be super useful for simulating positioning calibration for imperfect components: decenter, errors in focal length, ...
09:51 < fltrz_> having a bunch of equations telling you how to place your lenses is of little use in telling you how to compensate for specification error on the components
10:02 < muurkha> cool :)
10:02 < muurkha> if you're familiar with Octave it might be the easiest way to do a raytracer. otherwise maybe numpy
10:08 < fltrz_> since you mentioned you were surprised to find out how easy it was, it may be because you followed / read one set of papers / tutorials / ... versus another set. do you have recommendations to share?
10:11 < muurkha> not useful recommendations, I think. fundamentally the issue is that lots of stuff written about raytracing is about how to get photorealistic images, and there's no end to the clever tricks you can do there
10:12 < muurkha> but you don't need a lot of clever tricks, and reading papers and tutorials can mislead you into thinking that you do
10:12 < fltrz_> I thought perhaps you'd found some "raytracing made simple" article with a minimalist raytracer architecture
10:13 < muurkha> well, I did find a few, but then I didn't follow them :)
10:13 < fltrz_> you basically reinvented the wheel to be sure you understood each step?
10:14 < muurkha> not really, I mean, I was using knowledge I'd read before. and for the intersection test I had once understood the equation but I didn't when I copied and pasted it: http://canonical.org/~kragen/sw/aspmisc/my-very-first-raytracer
10:15 < fltrz_> oh, you are kragen?
10:15 < muurkha> yeah
10:15 < muurkha> that was designed to make sort-of photorealistic images rather than analyze lens systems
10:15 < fltrz_> I just read some of your comments a few hours ago, on the HN discussion on lapsed physicists
10:15 < fltrz_> mostly agreed
10:15 < muurkha> heh, I hope I wasn't a jerk
10:16 < fltrz_> heh, we're all entitled to our own opinions, and I think mine aren't particularly shared by most
10:17 < muurkha> later I wrote this much shorter raytracer in Clojure: http://canonical.org/~kragen/sw/dev3/circle.clj
10:17 < fltrz_> in my opinion, the root of the problem has little to do with physics, and more with the economic model of society
10:17 < muurkha> oh?
10:18 < fltrz_> for example, I think most people would describe me as a "lapsed physics *student*"
10:18 < muurkha> 39 lines of Clojure
10:18 < fltrz_> that's too short to be possible ;)
10:18 < muurkha> oh, then you probably *do* know Octave, so you should use that ;)
10:18 < fltrz_> yes, I know octave
10:18 < fltrz_> personally, I don't consider myself lapsed, just unsupported
10:19 < fltrz_> or rather, self-supported, so I do manual labour in a car factory about a third of the year, and then I live as cheaply as possible and read and learn, and work on my own (theoretical) physics
10:20 < fltrz_> so from my perspective I'm still doing physics
10:20 < muurkha> sure. you might have noticed I talked about a few people doing that kind of thing ;)
10:21 < fltrz_> to solve my "problems" is to solve the problems of the bulk of the populace, so it has nothing to do with physics.
10:21 < muurkha> anyway, fundamentally, a raytracer just finds the nearest intersection between a ray and (some numerical approximation of) the scene geometry, and the normal of the scene geometry at that point, and then traces a new ray from that point in some other direction determined by, say, Snell's law
10:21 < fltrz_> although the problems trickle down and certainly manifest inside universities, power structures, etc., that is not the root of the problem
10:22 < muurkha> and if you allow iterative algorithms that's not a hard thing to do
10:22 < fltrz_> is phase information or intensity kept as it propagates?
10:22 < muurkha> then you invoke it with different rays and summarize the results in some way
10:23 < muurkha> not phase, sometimes intensity
10:23 < muurkha> raytracing is geometrical optics, so it pretends that light is corpuscular
10:23 < fltrz_> right
10:25 < fltrz_> what country did you grow up in?
10:25 < muurkha> this body grew up in the US
10:26 < fltrz_> did you use Alonso & Finn for electromagnetism at some point?
10:28 < muurkha> not familiar
10:28 < muurkha> I did take a couple of physics classes in the US, including electromagnetism, but I don't remember what textbook we used
10:28 < muurkha> but the name doesn't ring a bell
10:29 < fltrz_> so you studied something other than physics?
10:29 < fltrz_> chemistry? math?
10:30 < muurkha> I study a lot of things, http://canonical.org/~kragen/dercuano
10:30 < muurkha> .t
10:30 < saxo> Dercuano
10:32 < muurkha> you don't really need to carry intensity information around usually, but if you don't put in a recursion limit you'll sometimes hit an infinite loop
10:38 < fltrz_> I think we had an earlier discussion on the Burrows-Wheeler transform, with alternating left and right, but we didn't find out if it would be reversible or not
10:38 < muurkha> in Octave it might be simplest just to start with a 3×n bundle of rays and carry them all through a finite sequence of intersection/reflection/refraction tests (omitting Fresnel-equation reflections for analyzing a lens system at first)
10:38 < muurkha> yeah, no way to make it reversible has occurred to me since, but I haven't thought about the problem much
10:39 < fltrz_> neither have I, I just saw your page again, and my brain recalled that discussion
10:41 < fltrz_> random tangent, have you looked into hyperdimensional computing?
10:42 < muurkha> what's that?
10:42 < fltrz_> 10-kbit vectors
10:43 < fltrz_> .g "hyperdimensional computing"
10:43 < saxo> https://techxplore.com/news/2021-05-brain-like-ai-hyperdimensional.html
10:43 < fltrz_> hmm
10:43 < muurkha> oh, like Kanerva's fully distributed representation?
10:43 < fltrz_> yes, kanerva
10:43 < fltrz_> also random indexing
10:44 < muurkha> I've never actually implemented kanerva's ideas, but I thought they were fascinating and promising when I encountered them last millennium
10:46 < fltrz_> there's a recentish paper by kanerva and others where they use it to detect error signals from EEG; they quantize the EEG data to *100 levels* and get state-of-the-art performance
10:47 < fltrz_> in conventional knowledge you want way more than a ~7-bit ADC for EEG!
10:47 < fltrz_> they quantize to 7 bits!
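Going back to the raytracer exchange above: the core loop muurkha describes (nearest intersection, surface normal, new direction from Snell's law, done for a whole 3×n bundle of rays at once) fits in a few lines of numpy. This is only an illustrative sketch of that step, not his my-very-first-raytracer code; the sphere, the refractive indices, and all names here are made up:

    import numpy as np

    def intersect_sphere(o, d, center, R):
        # nearest positive hit parameter t for rays with origins o and unit
        # directions d (both 3xN) against a sphere; np.inf where there is no hit
        oc = o - center[:, None]
        b = np.einsum('ij,ij->j', d, oc)
        c = np.einsum('ij,ij->j', oc, oc) - R**2
        disc = b**2 - c
        sq = np.sqrt(np.maximum(disc, 0.0))
        t = np.where(-b - sq > 1e-9, -b - sq, -b + sq)
        return np.where((disc >= 0) & (t > 1e-9), t, np.inf)

    def refract(d, n, n1, n2):
        # vector form of Snell's law: unit directions d, unit normals n facing the
        # incoming rays (both 3xN), going from index n1 into n2; TIR not handled
        r = n1 / n2
        cos_i = -np.einsum('ij,ij->j', n, d)
        cos_t = np.sqrt(1.0 - r**2 * (1.0 - cos_i**2))
        return r * d + (r * cos_i - cos_t) * n

    # usage: a small fan of near-axis rays refracted into a glass sphere
    M = 5
    d = np.vstack([np.linspace(-0.05, 0.05, M), np.zeros(M), np.ones(M)])
    d /= np.linalg.norm(d, axis=0)
    o = np.zeros((3, M))
    center, R = np.array([0.0, 0.0, 10.0]), 3.0

    t = intersect_sphere(o, d, center, R)
    hit = o + d * t                        # hit points, 3xM
    normal = (hit - center[:, None]) / R   # outward unit normals; at this front
                                           # surface they already face the rays
    print(refract(d, normal, 1.0, 1.5))    # ray directions inside the glass

Repeating this intersect/refract pair once per surface, in order, is the "finite sequence of tests" version suited to analyzing a lens stack rather than rendering.
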
10:47 < muurkha> neat
10:48 < muurkha> yeah, clearly the brain itself has to be tolerant of massive amounts of noise in decoding its own signals
10:48 < fltrz_> the EEG was taken noninvasively
10:48 < muurkha> right, I figured
10:48 < fltrz_> they did use the average-electrode-voltage-as-reference trick
10:49 < fltrz_> but this could be done with an opamp summer in the analog domain, and then use a low-bit-depth ADC
10:49 < muurkha> yeah
10:50 < fltrz_> this should make 100+ electrode EEG systems dirt cheap
10:51 < muurkha> oh, interesting
10:54 < fltrz_> there's also a very interesting paper https://www.nature.com/articles/s41378-020-0173-z?error=cookies_not_supported&code=4931a321-ca5a-4a6b-80c3-b275213a49a7
10:55 < fltrz_> they place a micro permanent magnet on the accelerometer membrane to make a magnetic gradiometer
10:55 < fltrz_> this should allow portable low-noise MEG
10:56 < fltrz_> the chosen accelerometers were big-brand expensive though
10:56 < muurkha> heh, error=cookies_not_supported is always my favorite author
10:56 < fltrz_> I haven't put time into looking at the 20-cent (as opposed to $25) accelerometers and their noise figures yet
10:57 < fltrz_> straight copy-paste from a lynx / duck duck go result
10:57 < muurkha> even US$25 accelerometers are pretty noisy
10:58 < fltrz_> search "single point MEMS gradiometer accelerometer" if the link didn't work. it's open access
10:58 < fltrz_> they were using analog accelerometers. I don't have experience with accelerometers
10:59 < muurkha> mine is indirect, we used them on our satellites
10:59 < fltrz_> is ubuntu canonical your employer?
11:00 < fltrz_> they launch sats?
11:00 < muurkha> no, they just wanted to use our domain name, we said no
11:01 < fltrz_> ah
11:39 < lsneff> https://cdn.aff.yokogawa.com/9/400/details/minimal-fab-one-stop-solution.pdf
13:02 < muurkha> .t
13:02 < saxo> Sorry, page isn't HTML
--- Log closed Fri Nov 12 00:00:30 2021