--- Day changed Tue Jul 22 2008 | ||
-!- Splicer [n=p@h124n1c1o261.bredband.skanova.com] has quit [Read error: 104 (Connection reset by peer)] | 00:19 | |
-!- PanGoat [n=pan@ovid.sensoryresearch.net] has quit ["The computer fell asleep"] | 01:51 | |
-!- nsh [n=nsh@87-94-146-186.tampere.customers.dnainternet.fi] has quit [Read error: 110 (Connection timed out)] | 05:01 | |
-!- nsh [n=nsh@eduroam-80.uta.fi] has joined #hplusroadmap | 05:55 | |
kanzure | ybit: on the contrary | 09:43 |
kanzure | nsh: I'd be happy to | 09:43 |
nsh | sweet | 09:44 |
kanzure | ybit: btw, I got what you wanted | 09:45 |
kanzure | please read http://gregegan.customer.netspace.net.au/DIASPORA/01/Orphanogenesis.html first | 09:45 |
* nsh was just watching Fischell's TED talk on TMS, internal neuropacemaker technology for migraine [and other brain disease] treatment, and other stuff | 09:46 | |
nsh | commented on it in ##neuroscience | 09:46 |
kanzure | seems like something to avoid | 09:47 |
* kanzure found his quadcore | 09:47 | |
kanzure | and has four days to plan for Palo Alto | 09:47 |
kanzure | wtf | 09:47 |
kanzure | nsh: I like your forest fire analogy | 09:51 |
* nsh nods | 09:51 | |
nsh | that's how i'd guess the treatment works: the neurons go into a refractory period after overstimulation by the electrical signal and are unable to propagate the epilepsy/migraine | 09:52 |
kanzure | I'm guessing it's more like trying to cut it off by the balls/legs | 09:53 |
nsh | hmm | 09:53 |
kanzure | ybit: when you reawaken from death we can talk | 10:34 |
kanzure | the box turns on | 11:29 |
kanzure | but the video card still doesn't send anything to the monitor | 11:29 |
kanzure | so it's not POSTing | 11:29 |
kanzure | new mobo .. :( | 11:29 |
nsh | :-/ | 11:37 |
kanzure | http://heybryan.org/mac.html | 12:48 |
kanzure | is this an insult? | 12:48 |
ybit | tough to say | 12:55 |
nsh | somewhere between a compliment and constructive criticism | 12:55 |
ybit | ^ agreed | 12:55 |
* ybit hates paperwork | 12:56 | |
* fenn snickers | 12:56 | |
kanzure | i r bending unit | 12:56 |
ybit | Hmm, I thought Bender's Game was supposed to be released this Summer | 12:57 |
kanzure | aww, sis just called because she's picking out her first laptop | 13:01 |
nsh | kanzure, explain mirror neurons | 13:04 |
kanzure | people tell me it's about social stuff | 13:04 |
kanzure | :) | 13:04 |
kanzure | something about simulating other people's social status for empathy generation | 13:04 |
nsh | apparently they're neural firing patterns which are activated both when the animal (person) themselves performs an action, and when they observe another performing the same action | 13:05 |
kanzure | do you know of DNA and RNA ladders? | 13:05 |
kanzure | we need something like this for discussing neural cultures | 13:06 |
nsh | yeah, they're used in gel electrophoresis as reference for molecular weights | 13:07 |
nsh | they contain [sets of] molecules of different masses; the lighter ones travel further down the gel under the applied voltage | 13:08 |
nsh | so you get ¦¦¦ ¦ ¦ ¦ ¦ | 13:08 |
nsh | each line represents a certain weight (usually given in number of bases [nucleotides]) | 13:09 |
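nsh's band diagram can be made quantitative. As a minimal sketch (the band positions below are made up, since no real gel data appears here), migration distance is roughly linear in log10(fragment length), so fitting the ladder's known bands lets you interpolate the size of an unknown band:

```python
import math

# Hypothetical ladder: band sizes in base pairs and measured migration
# distances in mm (smaller fragments travel further down the gel).
ladder_bp = [1000, 750, 500, 250, 100]
ladder_mm = [10.0, 14.0, 20.0, 30.0, 43.0]

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Migration distance is approximately linear in log10(size),
# so fit log10(bp) against distance and invert for unknown bands.
a, b = fit_line(ladder_mm, [math.log10(bp) for bp in ladder_bp])

def estimate_bp(distance_mm):
    """Estimate fragment size from how far its band travelled."""
    return 10 ** (a * distance_mm + b)

print(estimate_bp(25.0))  # a band between the 250 and 500 bp rungs
```

This is the calibration role a ladder plays: it turns a qualitative picture of bands into a quantitative size estimate.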
kanzure | correct | 13:10 |
kanzure | instead of a qualitative description of neuronal behavior, or a description with just electrodes or something, why not just SEND the damn researchers the damn ladder/standard/reference? | 13:11 |
nsh | well, neural behaviour is a bit more complex than distance travelled along a potential difference in a certain time... | 13:12 |
-!- nsh [n=nsh@eduroam-80.uta.fi] has quit ["mrr"] | 13:18 | |
kanzure | ybit: oh, plus there's brownie points since we're basically doing Egan's "orphanogenesis map" for him | 13:26 |
procto | kanzure: I think it's an invitation to meet some time in san francisco from aug 4 to the 8th | 13:31 |
procto | I have applied my considerable skills as a linguist | 13:31 |
procto | and that is my verdict | 13:31 |
kanzure | :) | 13:33 |
kanzure | indeed | 13:33 |
kanzure | wait a sec, voice sucks | 13:36 |
kanzure | where's my log? | 13:36 |
kanzure | I like how pasting links to http://heybryan.org/mac.html is self-referential to the tendency for me to post links | 13:38 |
-!- nsh [n=nsh@87-94-146-186.tampere.customers.dnainternet.fi] has joined #hplusroadmap | 13:40 | |
procto | nsh: so what's your beef specifically with yudkowsky? I'm curious | 13:40 |
procto | he's certainly bombastic, but I wouldn't say pseudointellectual | 13:40 |
kanzure | procto: just for some context, did you see his video? | 13:41 |
nsh | he wrote: http://www.overcomingbias.com/2008/07/anything-right.html | 13:41 |
procto | kanzure: which video? | 13:41 |
nsh | therefore he is an idiot, masquerading as a philosopher, pretending to be an artificial intelligence researcher | 13:41 |
kanzure | nsh: what's wrong with that? he's talking about "being grounded" | 13:43 |
procto | I don't understand the big problem with that article | 13:43 |
kanzure | yeah, that one looks ok | 13:43 |
procto | he's saying in a screenfull what could be said in a paragraph | 13:43 |
procto | and that's usually his problem | 13:43 |
kanzure | hardly | 13:43 |
procto | other than that, I disagree with him some of the time, and agree with him some of the time | 13:43 |
procto | I have not seen evidence to unmask him as some sort of phony | 13:43 |
kanzure | FAI? | 13:44 |
procto | you are against the idea of friendly ai? or against his approach to it? | 13:44 |
nsh | what's he saying? | 13:44 |
procto | idea I mean the necessity for a fundamental friendliness built into ai | 13:44 |
procto | I'm not trying to justify EY | 13:45 |
kanzure | his argument is typically "FAI or else UAI and we're all doomed" | 13:45 |
procto | I'm trying to understand your objections | 13:45 |
nsh | he's the philosophical equivalent of the machines in nineteen eighty-four that churn out music and story-books for the consumption of the proles | 13:45 |
procto | kanzure: UAI? | 13:45 |
kanzure | uFAI | 13:45 |
nsh | there is no original thought, just a churning of ideas received from others, undigested and mechanically repermutated | 13:45 |
kanzure | i.e., non-F AI | 13:45 |
nsh | EVIL RABATS | 13:45 |
procto | kanzure: as long as it's not UFIA | 13:46 |
procto | :> | 13:46 |
* nsh smiles | 13:46 | |
kanzure | uhm, I mean UFAI | 13:46 |
kanzure | yes | 13:46 |
kanzure | wait | 13:46 |
kanzure | what? | 13:46 |
procto | kanzure: UFIA = unsolicited finger in ass | 13:46 |
procto | fark meme | 13:46 |
kanzure | oh, I thought you might mean unfriendly intelligence augmentation | 13:46 |
nsh | heh | 13:47 |
procto | nsh: I'm going to try and summarize what you said, and you can tell me if I'm right | 13:47 |
nsh | honestly, if you can read that article without feeling the slightest contempt, there is something wrong with your brain | 13:47 |
procto | EY's main output seems to be merely an aggregation, and of mediocre content at that | 13:48 |
procto | is that about what you're saying? | 13:48 |
* nsh is generalising from this one article | 13:48 | |
nsh | when i read or listen to someone i construct a model of how they think | 13:48 |
kanzure | nsh: let's not attack the guy himself though | 13:48 |
kanzure | let's focus on his arguments and plans | 13:48 |
kanzure | ah, well | 13:48 |
kanzure | if you're talking about his personal model of how he thinks | 13:49 |
procto | nsh has been going wild here with ad hominems | 13:49 |
nsh | right | 13:49 |
nsh | procto.. | 13:49 |
procto | but it seems the main point nsh is making is that he isn't original enough? | 13:49 |
nsh | never mind | 13:49 |
* nsh has done too much of this today | 13:49 | |
procto | I mean, that could be the case | 13:49 |
kanzure | procto: nsh might be getting the Eli-distaste from me | 13:49 |
procto | maybe there's something wrong with *my* brain, and I'm always interested in finding that out | 13:50 |
nsh | nah, i didn't even realise he was associated with the SIAI until after i made that comment here last night | 13:50 |
nsh | there is procto, believe me | 13:50 |
procto | well, I knew that, but the specific nature of the deficiency always eludes me :> | 13:50 |
nsh | an ad hominem argument is where your logical progression goes from <person has X characteristics> -> <their argument is faulty> | 13:50 |
nsh | my progression was the reverse | 13:50 |
nsh | the guy's an idiot, because what he says is ridiculous | 13:51 |
nsh | not the converse | 13:51 |
nsh | please, don't use the phrase ad hominem incorrectly | 13:51 |
procto | ok, perhaps that was indeed your internal path, though that was not evident to me from what meager buffer I've read. | 13:51 |
nsh | it's really annoying. | 13:51 |
procto | oh, I was using it quite correctly | 13:51 |
nsh | no. i made no argument. | 13:51 |
procto | I read you attacking him as a pseudointellectual followed by attacks on his points. | 13:51 |
nsh | it cannot be an argument ad hominem | 13:51 |
* nsh is not here to spoon-feed anyone | 13:52 | |
ybit | http://helen.pion.ac.uk/~thomas/microcircuits08/ | 13:52 |
nsh | you either get it, or you don't | 13:52 |
procto | not here to be spoon fed. so just to clear it up, the copious attacks on his person made it seem like an ad hominem when it may very well have not been an ad hominem | 13:52 |
kanzure | http://www.sl4.org/wiki/EliezerYudkowsky/Questions | 13:53 |
procto | nsh: in fact, I assumed it wasn't merely a random attack on him, but rather motivated by a dislike of his opinions so I decided to inquire. Not to undermine, but to understand. | 13:54 |
procto | because it seemed a very virulent dislike | 13:54 |
kanzure | http://www.sl4.org/wiki/SoYouWantToBeASeedAIProgrammer | 13:54 |
procto | kanzure: perusing | 13:55 |
* nsh smiles at procto | 13:55 | |
kanzure | just for the record, I don't have a formalization of my beef with his schemes | 13:55 |
kanzure | but | 13:55 |
kanzure | in general, the "FAI world dominator scenario" strikes me as peculiar | 13:55 |
kanzure | the idea of having to come up with a friendly ai to stop the other ai initiatives from emerging | 13:56 |
kanzure | and from trying to rule the world with an iron fist to make the perfectly safe environment | 13:56 |
nsh | ok, if it helps, procto, imagine trying to explain why someone is ugly. aesthetic judgements are not meant to be explained logically, they're meant to be recognised | 13:57 |
nsh | (yes, you can, and should, make aesthetic judgements on the way people think) | 13:57 |
nsh | the way he writes indicates that he thinks "uglily" (stupidly) | 13:58 |
nsh | one could dispute any particular idea, but it would be beside the point | 13:58 |
nsh | like saying how to fix a monster's left toe | 13:58 |
nsh | "People said all sorts of ridiculous things about AIs that they'd never dream of saying of themselves. It turned out that empathy wasn't good enough to understand AI, not even close. Even so, it can always be worse, and just making up ridiculous stuff at random without even empathic rationality-checks is indeed worse. " | 13:59 |
nsh | how can that not set off multiple warning bells?! | 13:59 |
kanzure | nsh: it's hard to verbalize the warning bells | 14:00 |
kanzure | for instance, what's "worse" there? | 14:00 |
procto | nsh: if I say something is ugly I can most certainly describe why I think so. someone may not hold the same values as me and I may say "he's ugly because his face is asymmetrical" whereas you value asymmetrical faces. so we can understand WHY our judgements are different. | 14:00 |
kanzure | and what makes us so sure that the paragraph wasn't made up at random? | 14:00 |
procto | this is much like discussing elegance in mathematics, linguistics, or computer science | 14:00 |
kanzure | have you ever looked at some code and gone "yuck" ? | 14:01 |
procto | it's a useful gestalt shortcut, but I can break it down when it's needed | 14:01 |
procto | of course I have, like I said, elegance | 14:01 |
kanzure | I have looked at many pieces of nonelegant code and not gone yuck :) | 14:01 |
procto | there are levels of inelegance :> | 14:01 |
procto | it's not binary | 14:01 |
procto | yuck is incredibly inelegant, i.e. ugly | 14:01 |
procto | so certainly I understand nsh's sentiments | 14:02 |
procto | it's the breakdown I'm interested in | 14:02 |
procto | so I can understand what values are different that lead us to these different holistic judgements | 14:03 |
procto | I'm not being pedantic, just curious | 14:03 |
procto | if this decomposition is too much of a hassle, so be it | 14:03 |
procto | nsh: I don't understand what the paragraph means. that can be a deficiency of the author, or lack of context. if the former, then I can sort of understand your point. | 14:04 |
procto | EY's prose leans almost towards post-modernist directions at times, though I'm sure that isn't his intention | 14:04 |
nsh | ( context: http://www.sl4.org/wiki/EliezerYudkowsky/Questions ) | 14:04 |
procto | ah, right | 14:05 |
nsh | basically, this chris guy is calling EY on crap that he wrote, and EY is circling around how stupid he was by after-the-fact contextualisation | 14:06 |
nsh | c.f. "the writing in CFAI only makes sense in contrast..."; and "The FAI I visualized back in the CFAI era was a lot more analogous..." | 14:07 |
nsh | i don't wanna be a hater, i've seen this kind of layering of bullshit over the cracks in bullshit so many times | 14:08 |
nsh | it's just second nature for me to call it | 14:08 |
procto | nsh: actually, he seemed to be asking for clarifications. I understood EY's original sentences more than I did his response, and the paragraph you posted is in particular incongruous | 14:08 |
nsh | his intention is to ask for clarification | 14:08 |
nsh | his result is to call on crap :-) | 14:08 |
nsh | what people do and what they intend to do are often at odds | 14:08 |
procto | well, I seem to have a understood what you are calling crap | 14:08 |
nsh | but again, the individual points aren't really relevant | 14:09 |
nsh | it's syndromic | 14:09 |
procto | so something there must be the symptom of our value divergence | 14:09 |
nsh | attending to the symptoms blinds one to the underlying condition | 14:09 |
kanzure | nsh: the gap between what Eli alludes to intending to do versus what he actually does, for instance, in consideration of the paragraph that you pasted, perhaps? | 14:09 |
nsh | kanzure, can you elaborate, sorry? | 14:10 |
kanzure | nevermind | 14:10 |
kanzure | I'm just trying to be a little constructive here | 14:10 |
procto | that sentence that chris is asking about means : "we try to make our decisions rational via some system X. However, we don't really apply system X to the decisions of others, even if sometimes we think we do, or if sometimes we think we shouldn't for the benefit of that other person" | 14:11 |
procto | nsh: the condition is the value divergence. there is something that you value and I don't, and vice versa. the symptom of that is an instance of that divergence. | 14:11 |
nsh | (my mention of symptoms was unrelated to yours; hadn't read that line when i typed the word) | 14:12 |
procto | once I understand the divergent values I can 1) think about my own values in that regard 2) better understand how what I may communicate to you be understood | 14:12 |
procto | oh, ok | 14:12 |
procto | :) | 14:12 |
* nsh smiles | 14:12 | |
nsh | invoking value-divergence is a way of circumventing the matter of whether or not we should actually be able to come simply to an agreement over whether or not he is a kook | 14:13 |
kanzure | ". I would estimate that any given AI project is more likely to wipe out humanity through failure of Friendly AI" | 14:13 |
nsh | *make_sentence_better | 14:13 |
procto | nsh: no no, definitely not. that's what point #1 is for. I can look at the values I hold that make me think he's not a kook (whatever deficiencies he does have) | 14:16 |
procto | nsh: and perhaps agree with you | 14:16 |
procto | I am not as much of a relativist as, say, Arnia | 14:16 |
procto | but I tend to lean a bit in that direction | 14:16 |
* kanzure is completely confused by procto | 14:18 | |
procto | http://logarchy.org/p/science.jpg | 14:19 |
kanzure | http://heybryan.org/camera/internet/fos.jpg | 14:21 |
kanzure | kidding of course | 14:21 |
kanzure | but turning the conversation around | 14:21 |
kanzure | perhaps I will not punch Eli for nsh, | 14:22 |
kanzure | procto, have you read Eli's ideas before? | 14:22 |
procto | yes, I have | 14:22 |
kanzure | do you also believe (as he does) that we are doomed if we don't do FAI? | 14:23 |
procto | not quite. | 14:24 |
procto | I think there is a certain probability of danger that is not negligible, and the potential damage is so high that fai is a good insurance policy | 14:25 |
procto | his approach to fai may not be quite what I'd pick however | 14:25 |
kanzure | fai is based on a black swan though | 14:26 |
procto | I'm not familiar enough with his specific fai theoretical underpinnings, however. but from what I've seen expressed by him in a more popular manner regarding those, I don't think it is best | 14:26 |
kanzure | how is that a good insurance policy? | 14:26 |
kanzure | basically his fai idea is to "find the theoretical basis of friendliness and make this an intrinsic component of the system, i.e. if it doesn't work, it's dead" | 14:26 |
kanzure | which is an awesome idea | 14:26 |
kanzure | but a black swan. | 14:26 |
nsh | procto, i don't think that approach will work | 14:27 |
nsh | it's like getting a joke | 14:27 |
procto | risk = probability * loss, roughly | 14:27 |
nsh | if you don't get it, you can't look to what values you have that make you not get it | 14:27 |
procto | nsh: of course you can! I always do! | 14:27 |
procto | and sometimes I learn to like them | 14:27 |
procto | like chinese jokes | 14:27 |
procto | I used not to be able to get them | 14:27 |
procto | until I immersed myself more in the culture | 14:28 |
nsh | right | 14:28 |
procto | and now I can get them much better | 14:28 |
nsh | but introspection won't get you far | 14:28 |
procto | basically, I aligned my values | 14:28 |
nsh | ok, i see what you're saying | 14:28 |
procto | certainly there are limits, but I apply introspection | 14:28 |
procto | very very heavily | 14:28 |
kanzure | "risk = probability * loss" | 14:29 |
kanzure | see, that doesn't make any sense to me | 14:29 |
procto | kanzure: so the probability is low, but imo not negligible. I can't really quantify it due to its very nature. loss can be near total | 14:29 |
kanzure | you'll have to prove probability to me first | 14:29 |
kanzure | the only system that efficiently actualizes the universe is the universe itself | 14:30 |
kanzure | i.e. the process of physics | 14:30 |
kanzure | this is why we can't predict the future except by creating it | 14:30 |
procto | kanzure: part of my training and expertise is information security | 14:30 |
kanzure | you sir, should learn your thermodynamics | 14:30 |
procto | one of my motivations was specifically to deal with various hostilities in a post-human world | 14:30 |
kanzure | interesting | 14:30 |
procto | we use a rough approximation of that equation to figure out the way in which we protect information assets | 14:30 |
kanzure | probability means what though | 14:30 |
procto | security is perhaps a misnomer. survivability is being applied more often now. | 14:31 |
kanzure | I'd like to cite Peirce's approach to probability, but unfortunately it's not really written up on the internet | 14:31 |
procto | back to my point | 14:31 |
procto | the point is that as potential losses increase, you care less about how big or small the probability is | 14:31 |
kanzure | basically he argued that the only thing that matters is the actual physical process, so he'd sit down and record data from something and record those numbers | 14:31 |
kanzure | sure | 14:31 |
nsh | procto, what argument is what is applied by the people who want the LHC shut down | 14:31 |
nsh | *that | 14:32 |
nsh | i would be wary of joining their ranks | 14:32 |
procto | certainly | 14:32 |
procto | you care less about probability, but not not at all | 14:32 |
procto | hehe | 14:32 |
procto | as for the LHC, well, there are plenty of natural LHCs occurring around the universe | 14:32 |
procto | there aren't superhuman AIs we can point to | 14:33 |
procto | if we had theoretical underpinnings for AIs even remotely approaching those we have for the LHC | 14:33 |
kanzure | what? | 14:34 |
kanzure | Sorry, I just lost the context | 14:34 |
procto | it'd be much easier to estimate risk | 14:34 |
kanzure | what the hell is risk? | 14:34 |
nsh | there is no risk because the question is meaningless | 14:34 |
procto | that "equation" is a rule of thumb | 14:34 |
nsh | the friendly/unfriendly distinction offers no value | 14:34 |
procto | it's used to allocate resources | 14:34 |
nsh | until we have some idea of what an AI would be like | 14:34 |
nsh | it's a pointless distraction | 14:34 |
procto | where risk is greater, more resources are allocated | 14:34 |
nsh | (like most pseudophilosophy) | 14:35 |
* kanzure still doesn't know what risk is | 14:35 | |
* nsh back in a bit | 14:35 | |
kanzure | "risk is ... expos[ure] to a cahnce or loss or damage" | 14:35 |
kanzure | exposure to a chance?? | 14:35 |
kanzure | so when you're within 50 feet of a "probability" ? | 14:35 |
kanzure | exposure theory of probability | 14:35 |
kanzure | oh boy :) | 14:35 |
procto | risk is the uncertainty of a negative impact. it should be used as a relative measure. that is, one shouldn't be deriving absolute measures of risk. one could, but I don't think they would be meaningful. | 14:37 |
procto | so, a quick example: | 14:37 |
kanzure | uncertainty of a negative impact? so, basically, a black swan | 14:37 |
kanzure | so let's just not use black swans in our projects | 14:37 |
procto | an organization has 2 servers, and X resources one can dedicate to protecting them. server A has data more valuable than B. hence, a bigger fraction of X is spent to protect it. | 14:37 |
procto | it's a relative uncertainty | 14:38 |
procto | that is, server A is a bigger risk than server B | 14:39 |
procto | assuming probability of either being attacked is the same | 14:39 |
procto | one would evaluate the relative risk only while allocating resources | 14:39 |
procto | such as resources for researching FAI versus other things | 14:39 |
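procto's risk = probability × loss heuristic and the two-server example can be sketched with made-up numbers (the probabilities, losses, and the seven-sysadmin budget below are all hypothetical illustrations, not figures from the conversation):

```python
# Toy version of the two-server example: risk ~= probability * loss,
# and a fixed budget is split in proportion to each asset's risk.
assets = {
    "server_A": {"p_attack": 0.10, "loss": 1_000_000},  # more valuable data
    "server_B": {"p_attack": 0.10, "loss": 200_000},
}
budget = 7  # e.g. seven sysadmins to allocate

# Expected loss per asset, used here as a relative measure only.
risk = {name: a["p_attack"] * a["loss"] for name, a in assets.items()}
total = sum(risk.values())
allocation = {name: budget * r / total for name, r in risk.items()}

for name in assets:
    print(f"{name}: risk={risk[name]:,.0f}, share={allocation[name]:.2f}")
```

With equal attack probabilities, the split reduces to the ratio of potential losses: server A gets the larger share simply because losing it costs more.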
procto | I don't think UFAI is as much of a catastrophic risk as EY does | 14:40 |
kanzure | I still don't understand risk. if you're running out of resources in this god damn galaxy then you have bigger issues to deal with | 14:40 |
procto | so I don't think we should devote as many resources as he wants | 14:40 |
procto | you are assuming infinite resources. this may be true in the long term. but in the short term, one has limited resources. man-power, dollars in the bank, factories, math phds | 14:41 |
kanzure | I am not assuming infinite resources | 14:41 |
procto | all scarce resources | 14:41 |
kanzure | I am assuming that we're going to recognize the context in which these considerations are taking place ... | 14:41 |
kanzure | you have to admit that risk is just your quickfix to the bigger problem | 14:41 |
procto | risk is used in a particular type of resource distribution problem | 14:41 |
kanzure | it's like a bandaid that doesn't even cover up the serious problems | 14:41 |
kanzure | resource distribution? why not just focus on the rate of acquisition of materials instead? then you can distribute it effectively as a growth function of some sort | 14:42 |
procto | the serious problem? you mean the inability to accurately calculate probability for certain events? | 14:42 |
procto | then yes | 14:42 |
kanzure | what? | 14:42 |
kanzure | the serious problem of gathering resources | 14:42 |
kanzure | heh' | 14:42 |
kanzure | i.e., you'll argue "but we can't devote resources to space travel!" | 14:42 |
kanzure | holy fuck that's a bad situation then, no? | 14:42 |
procto | to tackle the problem of gathering resources you need to allocate it resources | 14:42 |
kanzure | sure | 14:42 |
procto | chicken, eggs, loops, catch 22's, etc. | 14:42 |
procto | at any specific point in time, one will always have a limited amount of deployable resources | 14:43 |
procto | these include human as well as physical resources | 14:43 |
kanzure | nobody's arguing against that | 14:43 |
kanzure | it's why we have caches | 14:43 |
procto | risk is a particular factor (only a factor) in a heuristic for determining beneficial distribution in a particular class of resource distribution problems. | 14:44 |
procto | does that clarify? | 14:44 |
kanzure | I just don't know why you would prefer to be uncertain (risk = "the uncertainty of a negative impact") | 14:45 |
kanzure | just avoid that in the design | 14:45 |
kanzure | I don't see what the problem is. | 14:45 |
kanzure | i.e., risk that you will be hit by a drunk driver -- get rid of the drunk, the car, or don't go outside (there are a few other options) | 14:46 |
kanzure | (by "the car" I mean, how about a system that actually, you know, doesn't dependent on a drunk driver) | 14:47 |
procto | you wouldn't prefer to be uncertain, but uncertainty is a fundamental fact of the universe | 14:47 |
kanzure | it's a factor of your design ... you either know your system can work or you know it won't, and if you don't know if it will work then you can always test it | 14:48 |
kanzure | well, usually you can test it | 14:48 |
procto | so, one can design systems with a different level of uncertainty | 14:48 |
procto | with better probabilities | 14:48 |
procto | and lower losses | 14:48 |
procto | that's one way to reduce risk | 14:48 |
procto | and you apply it to problems which you deem are high risk :) | 14:48 |
kanzure | ? | 14:48 |
kanzure | uh? | 14:48 |
kanzure | so give me an example | 14:48 |
kanzure | where 'risk' is a useful determinant | 14:48 |
kanzure | or a useful tool I mean | 14:48 |
procto | other than the server example I gave? | 14:49 |
kanzure | which I already solved | 14:49 |
kanzure | you shouldn't really be worrying about the server if you're running out of resources in this galaxy | 14:49 |
procto | I must have missed that | 14:49 |
kanzure | no, I mentioned it, then went on to the bandaid stuff | 14:49 |
kanzure | etc. | 14:49 |
procto | yes, but how does that apply? I don't understand. let's say you have 7 sys admins | 14:50 |
procto | those are your resources | 14:50 |
procto | how would you allocate them between server A and B? | 14:50 |
kanzure | why do I have 7 system admins when I need programmers ? | 14:50 |
kanzure | it's a computer, most of these tasks can be automated | 14:50 |
procto | you're rewinding back time. | 14:51 |
kanzure | the same with a server farm -- just nobody has put a server farm "in a box" yet I guess | 14:51 |
kanzure | what? | 14:51 |
procto | at each of those points, you would still be faced with a resource allocation problem. | 14:51 |
procto | yes, ideally, we can look at any system and completely overhaul it | 14:51 |
procto | but... we can't. | 14:51 |
kanzure | you can't | 14:51 |
procto | we can do it so some systems. | 14:51 |
kanzure | but I bet I can | 14:51 |
kanzure | or at least find people who can help me heh' | 14:51 |
kanzure | (etc.) | 14:51 |
kanzure | (I'm not really that conceited) | 14:51 |
procto | I think you misunderstand. I do not say it is impossible in theory, or even impossible in practice. I mean that it is not something that you can do under those conditions. | 14:52 |
procto | I'll explain | 14:52 |
kanzure | your conditions are significantly more restricting than reality | 14:52 |
kanzure | in fact I think you're assuming, within your scenario, that there might have even been past decisions that I have made that have made me stupid and I for some reason have 7 sys admins when they aren't going to be all that useful | 14:53 |
kanzure | I mean, I can assume that sys admins could be used to solve those problems or whatever, but I don't know why I need to use resources to maintain the servers themselves when they are mostly self contained systems anyway | 14:53 |
kanzure | for the tasks that need to be dealt with. | 14:53 |
procto | you are doing the SATs. the question is: if a train left station A at 4pm at 50mph, a train left station B at 8pm at 80mph, and the train from station B stops X miles short of station A, what speed should it go to prevent colliding with the train from A? | 14:54 |
procto | and your answer would be: should've designed a better automated train routing system. | 14:54 |
procto | yes you are correct | 14:54 |
procto | but at that point in time, that is not the case. | 14:54 |
kanzure | why should I be responsible for your stupid system though ? | 14:54 |
procto | you certainly shouldn't be | 14:54 |
kanzure | heh | 14:54 |
kanzure | so then what's the problem? | 14:55 |
procto | and as I said, ideally it can be redesigned | 14:55 |
kanzure | but it can be | 14:55 |
kanzure | it's not just ideally | 14:55 |
procto | in my example, "you" are an infosec consultant. you can tell the company who already did something stupid in their design that they need to rebuild it, and they will fire your ass, and you're lucky if you even get a visit fee. | 14:55 |
procto | or you can put a "bandaid" | 14:55 |
kanzure | okay? | 14:55 |
procto | it will be the optimal band-aid. | 14:55 |
kanzure | wonder why I got a job there | 14:56 |
procto | well, you'd notice that I'm not currently employed as an infosec consultant for that reason :> I don't like bandaids | 14:56 |
procto | however, it is very very very hard to do much else but more complex and sophisticated band aids | 14:56 |
kanzure | hm, infosec is real | 14:56 |
procto | the alternative is a revolution | 14:57 |
procto | and those are usually pretty bloody | 14:57 |
kanzure | you want to kill me because I don't think your 'risk' tool is useful? | 14:58 |
kanzure | or at least make me bleed | 14:58 |
procto | no, I mean that if your goal is revolution you may make people bleed even if that's not your intention | 14:58 |
procto | I am interested in bandaids that don't cover your skin, but integrate and improve | 14:59 |
procto | until one day you are nothing but a sum of your "bandaids" | 14:59 |
procto | to be metaphorical about it | 14:59 |
procto | one can change a bad design iteratively | 15:00 |
procto | on the fly auto-refactoring | 15:00 |
procto | (that's auto=self, not necessarily auto=automatic) | 15:00 |
procto | we started this line of questioning with "risk" | 15:00 |
kanzure | ah, well, | 15:01 |
procto | risk is a concept I would use to evaluate where I should be putting more bandaids and how expensive they should be | 15:01 |
procto | I'm using your terms so perhaps it's clearer, because I wasn't clear enough before | 15:01 |
kanzure | I see that you might be implying the use of on-the-fly risk assessment by various agents that are doing stuff for whatever reason | 15:01 |
kanzure | and I agree with you that this makes it a somewhat optimizing system over time perhaps | 15:01 |
kanzure | but whatever. :) | 15:01 |
procto | risk is a very fuzzy concept | 15:02 |
kanzure | for something so important? | 15:02 |
kanzure | :( | 15:02 |
procto | yes | 15:02 |
procto | there are fields of science that are young and uncertain | 15:02 |
procto | systems theory, which this is part of, is barely even a theory | 15:02 |
procto | cybernetics is fun and clever, but also very incomplete, and most definitely not sexy enough these days for enough resources to be devoted | 15:03 |
procto | risk isn't always the heuristic for resource allocation. I find that sexiness is a much more commonly used one :) | 15:03 |
kanzure | I am not interested in a quick fix. | 15:05 |
kanzure | but I am glad that you can understand the analogy here between quickfix/bandaid and something more fundamental | 15:06 |
kanzure | so within this context I suppose I can understand risk as a bandaid tool, sure | 15:07 |
kanzure | but I guess it all just has more importance that demands, from me, something more than a quickfix | 15:07 |
procto | your choice of terms indicates your negative attitude towards them, which is why there was a slight delay before I was able to use them without slight cognitive dissonance :> (introspection helps once again) | 15:14 |
kanzure | now then | 15:16 |
procto | so now we can see the value divergence, right? | 15:16 |
kanzure | back to information security for a sec | 15:16 |
procto | kk | 15:16 |
kanzure | there are many that say that 'security' is impossible given the second law of thermodynamics | 15:16 |
kanzure | except perhaps one time pads | 15:16 |
kanzure | but that's kind of in the name -- assuming it's truly a one time pad | 15:16 |
kanzure | and even then ;-) | 15:16 |
procto | I would tell you that security is impossible given anecdotal history | 15:16 |
kanzure | hah | 15:17 |
kanzure | :) | 15:17 |
procto | it is a maxim in infosec that 'no system is secure against a sufficiently determined attacker' | 15:17 |
procto | as i said, the paradigm these days is shifting towards survivability | 15:17 |
procto | which is what the better infoseccers were doing all along | 15:17 |
procto | just calling it something else | 15:17 |
procto | it's a mix of prevention, detection, and recovery | 15:18 |
kanzure | is it bad that one of the results for googling 'infosec' is 'infowar' | 15:18 |
kanzure | ah, good | 15:18 |
kanzure | so it's more holistic than just | 15:18 |
kanzure | "well, let's hope it holds" | 15:18 |
procto | most certainly | 15:18 |
kanzure | "let's hope it holds" is not a good strategy IMHO | 15:18 |
procto | the big vendors would have you believe that a big enough firewall is what you need | 15:18 |
procto | but well, that's just like building a very big very thick wall | 15:18 |
kanzure | case in point | 15:18 |
kanzure | China | 15:18 |
procto | and believing that's absolute security | 15:19 |
kanzure | again, China | 15:19 |
procto | I'm as proficient in methods of physical access as in the more incorporeal kind | 15:19 |
procto | because the best software security policy doesn't help you when a guy dressed as a janitor walks into your hq with a usb key in his pocket | 15:20 |
kanzure | or a screw driver | 15:21 |
* fenn grumbles about so-called "information security" really being "secrecy" | 15:21 | |
kanzure | multiply redundant drives, remember? | 15:21 |
procto | fenn: are you referring to security by obscurity? | 15:22 |
procto | or a general distaste for preventing information from being free? | 15:23 |
procto | because security by obscurity doesn't work, and I certainly believe that information should be free | 15:23 |
procto | but only insofar as it improves the balance of power-knowledge | 15:23 |
procto | once there is a homogeneous distribution, not necessarily totally equal, true freedom of information can be enacted | 15:24 |
procto | (defining my terms: http://en.wikipedia.org/wiki/Power-knowledge) | 15:24 |
fenn | general distaste for property and especially intellectual property | 15:25 |
kanzure | methinks 'property' is an equally ridiculous bandaid | 15:25 |
kanzure | no, perhaps even more of one | 15:25 |
procto | well, property is born out of scarcity. since as far as I understand it your goal is to eliminate scarcity, you would certainly be against property | 15:29 |
kanzure | fenn: I wonder if my bandaid approach is a good approach to talking about Eli's stuff | 15:29 |
kanzure | not quite the goal really | 15:29 |
kanzure | but your reasoning is correct | 15:29 |
procto | an ancillary effect :) | 15:29 |
kanzure | I should have written out what it was that I was talking with ybit about | 15:29 |
kanzure | hope he remembers | 15:29 |
procto | I would like to see the elimination of scarcity, in which case property would not be necessary | 15:29 |
procto | but as long as there is a scarcity of a resource, I support property rights as far as that resource is concerned | 15:30 |
kanzure | what the hell are property rights? | 15:30 |
procto | for things subsumed under IP, scarcity is very hard to define | 15:30 |
procto | property is a bandaid against violence. it's what even little children design to prevent *constant* fighting over the best toys. | 15:31 |
procto | there can be multiple systems | 15:31 |
procto | but as I said, it's a bandaid that results from scarcity | 15:31 |
procto | IP isn't valid because most incorporeal things are not scarce | 15:31 |
procto | property "rights" are whatever system one uses to allocate scarce resources. not referring to one particular system. | 15:32 |
procto | I was just saying that one should be used in the case of scarcity | 15:32 |
kanzure | fenn: how about a constraints markup language for the identification of avenues of research? | 15:36 |
kanzure | this is basically what I was talking with ybit about | 15:36 |
kanzure | this way we can formalize the "possibility space" that might be interesting to explore | 15:36 |
kanzure | and at the same time make Google Moupse or Google Earth/brain an interface to generating these constraints-files given the tagging that individuals do (just make it very simple for them to make structured data sets in the backend) | 15:37 |
kanzure | for instance "a mutation involving genes <list> might be of interest to <tag>" but this isn't a good way to put it ... "might be of interest to <tag>" is not formal | 15:37 |
kanzure | perhaps "might be of relevance to the well-defined experimental model <blah> from <bibtex>" | 15:38 |
fenn | why not just call it "google mouse" | 15:47 |
fenn | moupse looks like a typo (and probably was at some point) | 15:48 |
kanzure | dunno, it's what the page calls itself | 15:48 |
kanzure | google mouse is ok with me though | 15:48 |
kanzure | "if you give a mouse a cookie .." | 15:48 |
fenn | "if a packet hits a pocket on a socket on a port" | 15:49 |
fenn | "You can't say this? What a shame sir! We'll find you, Another game sir." | 15:50 |
kanzure | is it bad if I recalled the rest of the rhyme after that ? | 15:53 |
kanzure | "And the microcode instructions cause unnecessary risc," | 15:54 |
kanzure | bah, since when is RISC unnecessary | 15:55 |
kanzure | :) | 15:55 |
* kanzure wonders what's up with giving Google free publicity here | 15:55 | |
procto | kanzure: re:previous discussion. I think of FAI as a sort of "security by design" approach, much like you wanted to do with the servers. is EY's way the best way to be secure via a fundamental design characteristic? not necessarily, but it's one such approach, and I see few viable contenders in that area. | 15:57 |
procto | but I must be afk now for a bit | 15:57 |
kanzure | don't go :) | 15:57 |
kanzure | that's secure by totalitarian rule | 15:57 |
kanzure | or I mean, that's what it's saying | 15:57 |
kanzure | it's definitely not the truth | 15:57 |
kanzure | the problem is that we worry people will die, no? and other things as well | 15:58 |
kanzure | clearly the solution isn't "make everything that is created from now on, intrinsically friendly!" | 15:58 |
kanzure | perhaps instead we should focus on making some backups and sending them off to the stars and other locations | 15:59 |
kanzure | "but what about Vinge's ai that will run after you and hunt you down?" meh, guess all of life is hopeless then. boo hoo. let's not even try to live. | 15:59 |
kanzure | a malevolent being trying to destroy the galaxies ... where's Flash Gordon when you need him? | 16:20 |
-!- jm|earth [n=jm@p57B9C042.dip.t-dialin.net] has joined #hplusroadmap | 16:41 | |
procto | kanzure: maybe I misunderstand what he means by Friendly, or what you mean by Friendly | 17:02 |
procto | as for malevolent AIs, well, since we cannot model a superhuman AI's thoughts, those cannot be crossed out | 17:02 |
procto | there's probably a bias in fiction authors' overwhelming preference for horrible catastrophes that must be surmounted | 17:03 |
procto | that's a bias in the people who prefer to read such stories | 17:03 |
procto | but the set of potential outcomes is non-countable | 17:04 |
-!- ybit is now known as flash_gordon | 17:06 | |
flash_gordon | did someone call? | 17:06 |
-!- flash_gordon is now known as ybit | 17:06 | |
kanzure | whether or not a malevolent ai emerges should not be your main concern | 17:10 |
kanzure | survivability, regardless of malevolent force, is the idea .. | 17:10 |
kanzure | no? | 17:10 |
ybit | so what do you plan to speak about at biobarcamp? | 17:11 |
kanzure | I'm worrying that my answer to that is "everything" | 17:12 |
ybit | hehe | 17:12 |
ybit | it's all interesting, and you can't really give too much of a detailed speech in a limited time, so choosing to speak about everything may be the best choice | 17:15 |
kanzure | not really, that's too self-centered | 17:15 |
kanzure | so I'm probably going to talk about skdb and how people have been lazy and not contributing to the git repo | 17:16 |
kanzure | (maybe insulting them isn't the best way to go about things :-) | 17:16 |
ybit | :P | 17:16 |
kanzure | http://www.visitcalifornia.com/state/tourism/tour_inc_navigation.jsp?BV_SessionID=@@@@1723516698.1046124524@@@@&BV_EngineID=gadcgicfhilhbemgcfkmchcog.0&PrimaryCat=Regions&SecondCat=San+Francisco+Bay+Area | 17:17 |
kanzure | what the hell is a License Exception? | 17:17 |
kanzure | ERROR: you are not licensed to visit SF ? | 17:17 |
ybit | grr, javascript | 17:18 |
kanzure | "ERROR: gaydar detecting a straight guy!" | 17:18 |
ybit | lol | 17:18 |
kanzure | look at that nasty URL | 17:18 |
kanzure | BV_EngineID=gadcgicfhilhbemgcfkmchcog.0 | 17:18 |
kanzure | what type of variable is that.. | 17:18 |
kanzure | http://www.city.palo-alto.ca.us/info/default.asp woah, this guy looks serious | 17:23 |
kanzure | double handed mouse | 17:23 |
kanzure | heh | 17:23 |
-!- nsh- [n=nsh@87-94-146-186.tampere.customers.dnainternet.fi] has joined #hplusroadmap | 17:28 | |
* kanzure is done webhunting for phone numbers | 17:31 | |
kanzure | so presentable material for biobarcamp | 17:31 |
kanzure | hm | 17:31 |
-!- nsh [n=nsh@87-94-146-186.tampere.customers.dnainternet.fi] has quit [Nick collision from services.] | 17:31 | |
-!- nsh- is now known as nsh | 17:31 | |
kanzure | really I'm just good at link dumping | 17:31 |
kanzure | these websites always have better tutorials than I can give off the top of my head (well. to varying extents) | 17:32 |
kanzure | I was trying this: http://heybryan.org/ia.html | 17:32 |
kanzure | but I'm not too sure about it since, again, it's mostly just links | 17:33 |
kanzure | you know Google's crappy gchat interface with popups and so on? it would be interesting to implement this across my webserver on all HTTP HTML output (as a filter) so that I could send messages to users browsing my content | 17:35 |
kanzure | I fetch a very significant number of people in my tar pits | 17:36 |
kanzure | the http://heybryan.org/~bbishop/search/ stuff | 17:36 |
kanzure | gahhhh | 19:02 |
kanzure | thermal cooling PASTE | 19:02 |
kanzure | rawr | 19:02 |
* kanzure tears up the dorm looking for some paste :) | 19:02 | |
kanzure | the cpu is running at 174 degrees Fahrenheit | 19:15 |
kanzure | my laptop shuts down at 43 C. monolith is climbing to 80 C .... | 19:16 |
kanzure | oh boy. | 19:16 |
-!- Netsplit zelazny.freenode.net <-> irc.freenode.net quits: nsh, kanzure_, jm|earth, freer, procto, ybit | 19:17 | |
-!- Netsplit over, joins: nsh, jm|earth, ybit, procto, kanzure_, freer | 19:17 | |
kanzure | "Senator John McCain of Arizona, the presumptive Republican nominee for president, has proposed that the government offer $300 million to whoever invents a battery compact enough, powerful enough and cheap enough to replace fossil fuels." | 21:55 |
kanzure | delicious | 21:55 |
kanzure | http://www.innocentive.com/ | 21:56 |
kanzure | "innovation management" | 21:56 |
kanzure | this is totally bullshit | 21:56 |
* kanzure grabs all of the challenges | 22:12 | |
kanzure | http://gw.innocentive.com/ar/discipline/index?offset=0&max=1000&viewMode=abstract&categoryName=Chemistry&challenge-search-text=&subCategoryName=All&challenge-sort-by=challengeNumber&challenge-order-by=desc | 22:12 |
kanzure | "long ago Sun set us up as being in competition with Keith Henson's group to port NeWS to the Mac ... if we'd been introduced as potential collaborators instead we may well have got it delivered soon enough and well enough that NeWS would have displaced X and technical history would look a lot different" | 22:16 |
kanzure | http://en.wikipedia.org/wiki/NeWS | 22:16 |
kanzure | http://en.wikipedia.org/wiki/NeWS | 22:17 |
kanzure | oops | 22:17 |
kanzure | evidently Keith Henson and Tony were competing on that one ... | 22:17 |
kanzure | porting it to the Mac or some such. | 22:17 |
kanzure | also: | 22:18 |
kanzure | http://heybryan.org/challenges.html | 22:18 |
kanzure | because I think the innocentive.com website is bullshit. there's an entire output of their database | 22:18 |
kanzure | I wonder if NIH/DARPA would be interested in a semantic 'grant' framework | 22:54 |
ybit | i would :) | 23:05 |
ybit | especially in the coming year | 23:05 |
ybit | grantgopher.com just doesn't cut it | 23:07 |
kanzure | Thomas De Marse | 23:44 |
kanzure | aha | 23:44 |
kanzure | he's the guy that does the "brain chips" that fly virtual planes | 23:45 |
ybit | sooo... am i wasting my time teaching myself 30 minutes every day? | 23:55 |
ybit | +spanish | 23:55 |
kanzure | teaching yourself what? | 23:55 |
kanzure | Spanish? | 23:55 |
ybit | indeed | 23:55 |
kanzure | yes | 23:55 |
kanzure | just go into some spanish channels | 23:55 |
ybit | heh, why? | 23:55 |
kanzure | well, if your attitude is 'teach' yeah | 23:56 |
ybit | learning* then | 23:56 |
ybit | i'm already annoying those in #wikipedia-es :) | 23:57 |
ybit | the topic isn't specific, so i can do this | 23:57 |
ybit | unlike #python-es or #gentoo-es | 23:58 |
Generated by irclog2html.py 2.15.0.dev0 by Marius Gedminas - find it at mg.pov.lt!