--- Log opened Sun Jul 10 10:49:36 2011 | ||
-!- gnusha [~gnusha@131.252.130.248] has joined ##hplusroadmap | 10:49 | |
-!- Topic for ##hplusroadmap: http://gnusha.org/skdb/ http://groups.google.com/group/diybio http://bit.ly/diybionews http://gitduino.com/ http://gadaprize.org/ | logs: http://gnusha.org/logs/ | 10:49 | |
-!- Topic set by kanzure [~kanzure@131.252.130.248] [Thu Jan 20 10:44:20 2011] | 10:49 | |
[Users ##hplusroadmap] | 10:49 | |
[ AlonzoTG] [ devrandom ] [ Fiohnel ] [ jrayhawk ] [ saurik ] | 10:49 | |
[ archels ] [ drazak ] [ flamoot ] [ kanzure ] [ seanstickle] | 10:49 | |
[ augur ] [ elmom ] [ foucist ] [ mjr ] [ streety ] | 10:49 | |
[ bkero ] [ epitron ] [ gnusha ] [ nchaimov ] [ superkuh ] | 10:49 | |
[ CapNemo ] [ eridu ] [ Helleshin ] [ nuba ] [ uniqanomaly] | 10:49 | |
[ CIA-18 ] [ fenn ] [ JaredWigmore] [ pasky ] [ Utopiah ] | 10:49 | |
[ dbolser ] [ ferrouswheel] [ jmil ] [ PixelScum] [ ybit ] | 10:49 | |
-!- Irssi: ##hplusroadmap: Total of 35 nicks [0 ops, 0 halfops, 0 voices, 35 normal] | 10:49 | |
-!- Channel ##hplusroadmap created Thu Feb 25 23:40:30 2010 | 10:49 | |
-!- Irssi: Join to ##hplusroadmap was synced in 5 secs | 10:49 | |
-!- Fiohnel [~r3idslash@111.94.200.53] has quit [Read error: Connection reset by peer] | 10:52 | |
-!- Fiohnel [~r3idslash@111.94.200.53] has joined ##hplusroadmap | 10:53 | |
-!- klafka [~textual@cpe-69-205-70-55.rochester.res.rr.com] has joined ##hplusroadmap | 12:25 | |
-!- flamoot [42f18c6f@gateway/web/freenode/ip.66.241.140.111] has quit [Quit: Page closed] | 12:50 | |
ybit | what's the process of blocking friends who spam on google+ like they do everywhere else? | 13:02 |
ybit | i'd hate to block them from messaging me because they are friends, their stream just sucks | 13:03 |
ybit | oooh | 13:03 |
ybit | i see | 13:03 |
ybit | hurpa durp | 13:04 |
-!- lumos [~lumos@afbu181.neoplus.adsl.tpnet.pl] has joined ##hplusroadmap | 13:09 | |
ybit | heathmatlock@gmail.com add me on google+ or spam me i don't care | 13:21 |
-!- lumos [~lumos@afbu181.neoplus.adsl.tpnet.pl] has quit [Ping timeout: 255 seconds] | 13:22 | |
-!- lumos [~lumos@adqb198.neoplus.adsl.tpnet.pl] has joined ##hplusroadmap | 13:35 | |
ybit | http://www.youtube.com/watch?v=EC5sbdvnvQM | 13:39 |
ybit | "1966 prediction of the home computer " | 13:40 |
uniqanomaly | is there some prediction of year when 100% humanity will be rational? | 14:04 |
Utopiah | +oo ? | 14:04 |
uniqanomaly | ;] | 14:04 |
mjr | nah, humanity will be destroyed before +oo | 14:05 |
Utopiah | by itself? | 14:05 |
seanstickle | By aliens | 14:07 |
seanstickle | Or super-intelligent cats | 14:07 |
mjr | super-intelligent cat aliens | 14:08 |
seanstickle | That is the 3rd option, yes | 14:08 |
seanstickle | We can refer to them as SICA | 14:09 |
Utopiah | sounds scientific enough | 14:09 |
uniqanomaly | I have a question: can superstition-minded religious people and more rational agnostics equally be called "humans"? | 14:10
seanstickle | Yes | 14:10 |
seanstickle | Next question | 14:10 |
Utopiah | uniqanomaly: you might like #lesswrong | 14:10 |
uniqanomaly | i mean shouldn't there be different means to diversify | 14:11
uniqanomaly | Utopiah: I already do | 14:11 |
uniqanomaly | right | 14:11 |
kanzure | #lesswrong is banned in here | 14:15 |
uniqanomaly | kanzure: you mean like talking doesn't get anything done? | 14:16 |
kanzure | and various other reasons | 14:17 |
-!- uniqanomaly [~ua@dynamic-78-8-91-216.ssp.dialog.net.pl] has quit [Quit: uniqanomaly] | 14:19 | |
-!- nchaimov [~nchaimov@c-24-20-202-138.hsd1.or.comcast.net] has quit [Quit: nchaimov] | 14:19 | |
-!- nchaimov [~nchaimov@c-24-20-202-138.hsd1.or.comcast.net] has joined ##hplusroadmap | 14:21 | |
-!- nchaimov [~nchaimov@c-24-20-202-138.hsd1.or.comcast.net] has left ##hplusroadmap [] | 14:22 | |
-!- nchaimov [~nchaimov@c-24-20-202-138.hsd1.or.comcast.net] has joined ##hplusroadmap | 14:22 | |
jrayhawk | kanzure do you have a list of entities and topics of discussion that infuriate you somewhere | 14:29 |
jrayhawk | i ask for completely legitimate reasons and not just for fodder to troll you with | 14:30 |
kanzure | keeping a file like that doesn't sound healthy | 14:32 |
kanzure | 'angerfile' | 14:32 |
jrayhawk | ah, so you're saying self-quantification and analysis is unhealthy. interesting. perhaps i can interest you in this website, lesswrong.org? | 14:33 |
kanzure | why not a happyfile instead | 14:33 |
kanzure | i'm sorry but i've sort of given up against lesswrong, i don't know what to do to stop them | 14:33 |
jrayhawk | would it be filled with dancing kirbys | 14:33 |
mjr | I think it's .com though | 14:33 |
mjr | (which is too bad if the US some day decide to get someone extradited over it) | 14:34 |
jrayhawk | and also is subject to Verisign fuckery like SiteFinder | 14:34 |
seanstickle | "stop" lesswrong? Are they actually doing anything beyond writing articles that get submitted to HN? | 14:35 |
kanzure | seanstickle: http://siai.org/ | 14:35 |
kanzure | what i fail | 14:36 |
kanzure | fail fail fail | 14:36 |
kanzure | i clearly meant singinst.org | 14:36 |
seanstickle | That seems to be pretty much the same thing | 14:36 |
kanzure | no | 14:36 |
seanstickle | Except they all gather together to talk about the articles | 14:36 |
kanzure | lesswrong claims to just be about educating people about rationality | 14:36 |
kanzure | but in reality they are made up of singularitarians trying to rationalize their rationalizations about risk rationalization or something dangerous to that effect | 14:37 |
jrayhawk | not quite | 14:37 |
kanzure | yeah i botched that | 14:37 |
seanstickle | wha? | 14:37 |
kanzure | if i was more eloquent about this it wouldn't be as much of a problem | 14:37 |
jrayhawk | heehee | 14:38 |
seanstickle | Aren't they just Raelians, except with HAL standing in for the Space Aliens? | 14:38 |
kanzure | everything's religious with you.. :p | 14:38 |
kanzure | well | 14:39 |
kanzure | no, based on my experience they are more sophisticated | 14:39 |
seanstickle | religion is just a manifestation of a tribal culture | 14:39 |
kanzure | i am not interested in hearing your views on religion (sorry) | 14:39 |
seanstickle | So? | 14:39 |
seanstickle | Then don't bring it up | 14:39 |
seanstickle | Easy enough | 14:39 |
jrayhawk | so, SIAI has a problem in that hard AI takeoff is such an intractable problem that they lack good means of thinking about it, so they're hoping to generate more mindshare for the problem using a sort of rationalist religion to recruit people. | 14:39
kanzure | well boiling everything down to religion doesn't make for much of a conversation | 14:39 |
kanzure | jrayhawk: there's a lot of pieces missing in there.. like, | 14:40 |
kanzure | well i guess "rationalist religion" covers the fearmongering aspects and FUD | 14:40 |
seanstickle | I'm not sure "trying to rationalize their rationalizations about risk rationalization" leads to much of a better conversation | 14:40 |
kanzure | but their assumptions on actors/agents/decision theory/representation seems a little off somewhere | 14:41 |
kanzure | which again i guess can be grouped under religion | 14:41
kanzure | ok nevermind jrayhawk | 14:41 |
jrayhawk | Well, the religious part is that they think the sort of 'rationalist singularity' they're trying to jumpstart will be revolutionary and amazing | 14:42 |
seanstickle | rapture of the geeks indeed | 14:42 |
kanzure | heh their version of the precautionary principle is sorta extreme.. it's a proactionary-precautionary principle ("kill all the other AIs that are emerging") | 14:43 |
jrayhawk | (if you get a bunch of rationalists together discussing rationalism, they will be able to make themselves more rational faster and better, etc.) | 14:43 |
seanstickle | genocide of the AIs be damned, apparently | 14:43 |
jrayhawk | so, in a way, they're trying to combat AI singularity with human rationalist singularity, and that just seems dumb to me considering that rationalism isn't really all that empowering. | 14:44 |
seanstickle | Oh, the manichean dialectic is powerful with this one | 14:44 |
kanzure | jrayhawk: i feel that saying "they are just promoting rationality, and not actively doing XYZ" is somewhat dishonest.. because most of them, given some rationalization, *will* follow through with it and convince themselves that "oh, therefore we should make an ai that kills everything that's a potential threat to its global dominance" (etc.) | 14:44 |
mjr | that's rather disingenuous (or merely dumb) | 14:46 |
kanzure | nop it's "rational" | 14:46 |
kanzure | (okay maybe that's unfair :)) | 14:47 |
mjr | I was talking about your strawman | 14:47 |
jrayhawk | Or, I guess I should say, empowerment from rationalism has diminishing returns | 14:47
seanstickle | I like how all the visiting fellows are white guys, with the exception of 2 women and 1 asian. | 14:47 |
seanstickle | Cute. | 14:47 |
jrayhawk | And the places where the highest returns are being made are the places lesswrong is specifically unbelievably hostile towards. | 14:48 |
kanzure | mjr: i'm p. sure a particular person has published a lot on that scheme | 14:48 |
kanzure | jrayhawk: elaborate? | 14:48 |
kanzure | what is lesswrong hostile to, anyway? | 14:48 |
mjr | I'm pretty sure you're (possibly willfully) misreading said publications | 14:49 |
jrayhawk | Irrationalist religion. | 14:49 |
kanzure | mjr: that's possible, but i've spent a lot of time talking with these friends and i'm not convinced i'm misreading | 14:49 |
-!- QuantumG [~qg@rtfm.insomnia.org] has joined ##hplusroadmap | 14:50 | |
mjr | *shrug* "killing" is a blunt instrument that you just presume to be the one to use on possible competitor singleton-wannabes | 14:51 |
kanzure | okay, retarding them | 14:51 |
kanzure | might as well kill 'em | 14:51 |
mjr | there is sandboxing, if it can be shown to be effective in that scope (if not, well, we're screwed anyway *shrug*) | 14:52 |
kanzure | yeah i wouldn't let you sandbox me.. | 14:52 |
mjr | rather irrelevant if you would or wouldn't | 14:53 |
seanstickle | Ethical issues involved in killing AIs are not yet settled | 14:53 |
mjr | you'll be sandboxed too, or we're screwed ;) | 14:53 |
seanstickle | But I imagine there will be some | 14:53 |
kanzure | "my gun is bigger than yours! nuh-uh! times 1000! times infinity!" | 14:53 |
mjr | seanstickle, yeah. Meanwhile, AI's are especially easy not to kill, merely chuck away somewhere until they can be run safely | 14:54 |
kanzure | it doesn't matter if you are talking about ai.. might as well be talking about uploads | 14:54 |
seanstickle | mjr: I don't see that we have AIs yet, so I have no idea how easy it is to just chuck them away | 14:54 |
mjr | of course, but now we were talking about AIs | 14:54 |
mjr | seanstickle, ... | 14:54 |
kanzure | well you can shoot someone and they die | 14:55 |
mjr | They're data. | 14:55 |
kanzure | (they're running on biology and nucleic acids) | 14:55 |
kanzure | so there's your start. | 14:55 |
seanstickle | mjr: you're data too | 14:55 |
mjr | seanstickle, if you mean that you'll first have to _stop_ them running (along with any failsafes), yeah, that's the harder part, but it was pretty much presumed that there was the ability to kill. Ability to store while you're at it, well, okay, it's slightly harder in some circumstances, but a lot easier than with humans. | 14:56 |
kanzure | jrayhawk: i'm a little surprised "proactionary principle" isn't mentioned on lesswrong | 14:56 |
kanzure | aw crud | 14:57 |
kanzure | and the only mention of 'precautionary principle' is bgoertzel's article | 14:57 |
seanstickle | mjr: I have no idea how hard it is, and since we don't have any AIs yet, I'm not sure anyone else knows how hard it is either | 14:57 |
mjr | ... | 14:57 |
mjr | okay, be that way | 14:57 |
kanzure | http://lesswrong.com/lw/2zg/ben_goertzel_the_singularity_institutes_scary/ | 14:57 |
jrayhawk | haha | 14:57 |
kanzure | http://multiverseaccordingtoben.blogspot.com/2010/10/singularity-institutes-scary-idea-and.html | 14:57 |
seanstickle | mjr: submarines aren't fish | 14:58 |
mjr | computer programs are computer programs | 14:58 |
seanstickle | AI != computer program | 14:58 |
seanstickle | At least, this has not yet been demonstrated to be the case | 14:58 |
mjr | true, because not all computer programs are AI | 14:58 |
seanstickle | I don't know how to type a subset symbol | 14:59 |
seanstickle | But you know what I mean | 14:59 |
kanzure | \subset{} | 14:59 |
seanstickle | Ha | 14:59 |
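A minimal LaTeX sketch (an illustration, not part of the log) of the \subset notation kanzure suggests; the set names here are placeholders:

```latex
\documentclass{article}
\begin{document}
% "AI is a subset of computer programs": \subset is the (strict) subset
% symbol; \subseteq allows the two sets to be equal.
$\mathrm{AI} \subset \mathrm{ComputerPrograms}$, \quad
$\mathrm{AI} \subseteq \mathrm{ComputerPrograms}$
\end{document}
```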
kanzure | jrayhawk: "provably non-dangerous AGI" is exactly the signs of a precautionary principle | 15:00 |
kanzure | "SIAI's leaders and community members have a lot of beliefs and opinions, many of which I share and many not, but the key difference between our perspectives lies in what I'll call SIAI's "Scary Idea", which is the idea that:" | 15:00 |
kanzure | "progressing toward advanced AGI without a design for "provably non-dangerous AGI" (or something closely analogous, often called "Friendly AI" in SIAI lingo) is highly likely to lead to an involuntary end for the human race." | 15:01 |
mjr | Fine, if you do an AI using squishy goo, it'll also be somewhat harder to store than to kill. Here in the relevant world, we're talking mostly software, possibly some custom hardware but turingy anyway | 15:01
kanzure | "But SIAI's Scary Idea goes way beyond the mere statement that there are risks as well as benefits associated with advanced AGI, and that AGI is a potential existential risk." | 15:02 |
seanstickle | mjr: we may be talking about software now, but there's no convincing evidence that a super-intelligent AI is possible with just mostly software. | 15:03 |
kanzure | "provably non-dangerous AGI 'is highly likely to lead to an involuntary end for the human race'" is sort of cheating, since they can claim "well it's only a 99.9999% chance" | 15:03 |
kanzure | in reality i do think that there are some who think it's a 100% chance | 15:03
seanstickle | mjr: although, to be fair, there's no evidence that it's not possible either | 15:03 |
mjr | it's trivially possible via church-turing thesis | 15:04 |
kanzure | i feel that my issues with lesswrong are mostly because of siai.. the rationalist religion stuff is just funny and wouldn't be of concern otherwise | 15:04 |
seanstickle | mjr: really? | 15:04 |
jrayhawk | Do you object to the Scary Idea, or what they're doing about the Scary Idea? | 15:04 |
mjr | (though said thesis is not formally provable, its strength as a null hypothesis takes a lot to budge it) | 15:05
kanzure | jrayhawk: yes | 15:06 |
kanzure | i don't object to rationalist development but there is too much sickness in that particular community | 15:06 |
jrayhawk | Do you object to the Scary Idea because you think it's implausible, or because you're a misanthrope? | 15:06 |
kanzure | false dichotomy? | 15:07 |
jrayhawk | I mean the "unacceptably high probability of human extinction" being implausible | 15:07 |
kanzure | there's a number of reasons and i don't think i have one single biggest one | 15:07 |
kanzure | but | 15:07 |
mjr | apropos I don't object to the scary idea, but don't really find it scary because I'm a misanthrope ;) | 15:07 |
jrayhawk | Yeah, I'm a misanthrope, too. | 15:08 |
kanzure | "provably non-dangerous GI" is something i strongly object to | 15:08 |
kanzure | didn't someone do an existence proof of provably-non-dangerous-GI that totally failed? | 15:08 |
mjr | um, does that tell us something? | 15:08 |
jrayhawk | I don't see how that would be evidence of anything, yeah. | 15:08 |
kanzure | well it's evidence of a lack of foundation | 15:09 |
mjr | Anyway, sure provably non-dangerous GI may turn out to be a pipe dream. Good that there are people smoking that pipe, though, 'cause if so, we're toast ;) | 15:09 |
kanzure | including their inability to define intelligence in anything other than AIXI or whatever | 15:09
jrayhawk | Uh, risks are always speculative. That's what makes them risks. | 15:10 |
jrayhawk | They're saying "we don't know enough about this and it could kill us", and you appear to be saying "they don't know enough to say they don't know enough"? | 15:10 |
kanzure | so why would i waste my risk tolerance or risk points on a precautionist stance | 15:11 |
jrayhawk | So it seems to be more what they're doing about the scary idea that you object to, then. | 15:12 |
kanzure | really? | 15:13 |
kanzure | i'd definitely say that there's some probability between 0 and 1 non-inclusive that any ai could totally eliminate humanity.. sure | 15:13 |
mjr | and I'd say you just seem to grossly underestimate said risk, but I'll just presume you've read enough of the relevant material to not bother to try recalibrating you | 15:15 |
kanzure | mjr: maybe it would be helpful for you if i just assume that ai will destroy everything, and continue this discussion on that premise | 15:18 |
jrayhawk | The more interesting part is that a transhumanism singularity is far more likely to result in a sort of "humanity" that has some prayer of managing and surviving an AI takeoff as it happens than a rationalism singularity. | 15:19 |
jrayhawk | So what Kanzure does every day is already a good risk-management practice. | 15:19 |
mjr | nah, I'm going to be so helpful that I'm gonna fall asleep soon, thereby ending this waste of time for now | 15:19 |
kanzure | jrayhawk: where "a rationalism singularity" would be something about making people more aware of risks and decisions that may or may not cause an ai to launch paperclips | 15:19
kanzure | ? | 15:19 |
fenn | i wonder if a hardware project would qualify (i.e. pipetbot) | 15:19 |
fenn | for the diybio grant | 15:19 |
kanzure | fenn: i'm still trying to figure out if it's "business-only" | 15:20 |
kanzure | fenn: did you see L001? | 15:20 |
kanzure | er, LH001 | 15:20 |
QuantumG | I still haven't seen anyone with a hope in hell of making a human-level intelligence | 15:20 |
kanzure | QuantumG: your mom and dad | 15:20 |
jrayhawk | haha | 15:20 |
kanzure | unless you're just a .. oh my god | 15:20 |
QuantumG | you haven't met my brothers | 15:20 |
kanzure | i've met jrayhawk's brothers O_o | 15:21 |
jrayhawk | Both of them? | 15:21 |
kanzure | well | 15:22 |
-!- seanstickle [~seanstick@c-98-218-2-48.hsd1.dc.comcast.net] has left ##hplusroadmap [] | 15:22 | |
kanzure | there's still this non-negligible chance that you are all the same person | 15:22
kanzure | but no i've only met steve i guess | 15:22 |
jrayhawk | cunning disguises | 15:22 |
kanzure | master of them? | 15:22 |
QuantumG | in any case, I think people who make cakes from a box mix have a greater claim to have "made" that cake than most parents do to their children. | 15:23 |
kanzure | i can't stop listening to http://www.youtube.com/watch?v=9HSj-2shbqY :( | 15:23 |
kanzure | bbl.. bbq | 15:23 |
jrayhawk | a "rationalism singularity" being one where getting smarter to help each other get smarter results in smarter people faster (even though rationalism is more the art of doing 'less wrong', which, again, involves diminishing returns) | 15:23
kanzure | jrayhawk: oh there's also all the control-of-control-of-control stuff i forgot to consolidate here | 15:25 |
kanzure | jrayhawk: "resulting in smarter people".. so.. at least some aspect of transhumanism? | 15:25 |
jrayhawk | It's not really transformative, though, it's just making the best of what we've got. | 15:26 |
jrayhawk | While I'm sure transhuman elements will be drawn into it, they obviously aren't the primary focus. | 15:27 |
kanzure | bbl for realz | 15:27 |
QuantumG | and then there's this http://www.youtube.com/watch?v=AUQG9PhDNnk | 15:30 |
jrayhawk | this guy says a lot of words | 15:34 |
jrayhawk | i wish he would say fewer | 15:34 |
-!- flamoot [42f18c6f@gateway/web/freenode/ip.66.241.140.111] has joined ##hplusroadmap | 15:39 | |
flamoot | anyone | 15:40 |
jrayhawk | hello | 15:40 |
QuantumG | http://www.youtube.com/watch?v=4VBPM67ddY0 is gold too | 15:45 |
-!- nsh [~nsh@wikipedia/nsh] has joined ##hplusroadmap | 15:45 | |
fenn | mmmm.. sandwich clamps *drool* | 15:51 |
fenn | you can use a CD-ROM laser for the pulsed laser microfluidics pump | 15:52 |
fenn | so basically free | 15:52 |
fenn | also that's interesting for DIY bubble-jet perhaps | 15:53 |
nsh | sandwich clamps? | 15:56 |
-!- lumos [~lumos@adqb198.neoplus.adsl.tpnet.pl] has quit [Quit: Leaving] | 16:13 | |
-!- lumos [~lumos@adqb198.neoplus.adsl.tpnet.pl] has joined ##hplusroadmap | 16:50 | |
-!- eridu [~eridu@gateway/tor-sasl/eridu] has quit [Ping timeout: 250 seconds] | 16:55 | |
-!- dbolser [~dmb@bioinformatics.org] has quit [Ping timeout: 250 seconds] | 16:59 | |
-!- lumos [~lumos@adqb198.neoplus.adsl.tpnet.pl] has quit [Quit: Leaving] | 17:07 | |
-!- lumos [~lumos@adqb198.neoplus.adsl.tpnet.pl] has joined ##hplusroadmap | 17:08 | |
-!- klafka [~textual@cpe-69-205-70-55.rochester.res.rr.com] has quit [Quit: Computer has gone to sleep.] | 17:24 | |
fenn | 17:26 | |
-!- klafka [~textual@cpe-69-205-70-55.rochester.res.rr.com] has joined ##hplusroadmap | 17:26 | |
fenn | i can make a business around pipetbot | 17:27 |
fenn | might have to redesign it from scratch though; ultimaker might not be happy if i used their files for commercial gain | 17:27 |
fenn | let's make a blog called "more right" | 17:30
fenn | mistakes are the best way to learn | 17:30 |
-!- eridu [~eridu@gateway/tor-sasl/eridu] has joined ##hplusroadmap | 17:53 | |
-!- flamoot [42f18c6f@gateway/web/freenode/ip.66.241.140.111] has quit [Quit: Page closed] | 18:07 | |
-!- eudoxia [~eudoxia@r190-135-39-218.dialup.adsl.anteldata.net.uy] has joined ##hplusroadmap | 18:13 | |
-!- eudoxia [~eudoxia@r190-135-39-218.dialup.adsl.anteldata.net.uy] has quit [Ping timeout: 252 seconds] | 18:22 | |
-!- JayDugger [~duggerj@pool-173-74-79-43.dllstx.fios.verizon.net] has joined ##hplusroadmap | 18:30 | |
fenn | "LH001" is a poor name choice | 18:43 |
fenn | also, if that was in reply to me talking about pipetbot, they do different things | 18:44 |
-!- JayDugger [~duggerj@pool-173-74-79-43.dllstx.fios.verizon.net] has left ##hplusroadmap ["Leaving."] | 18:50 | |
nsh | who are you talking to? | 18:59 |
QuantumG | fenn don't need anyone else to talk "to" | 19:00 |
-!- lumos [~lumos@adqb198.neoplus.adsl.tpnet.pl] has quit [Read error: Connection reset by peer] | 19:09 | |
-!- klafka [~textual@cpe-69-205-70-55.rochester.res.rr.com] has quit [Quit: Computer has gone to sleep.] | 19:17 | |
nsh | right | 19:25 |
foucist | kanzure: so are you a startuper | 19:30 |
nsh | just an upstart | 19:31 |
-!- eridu [~eridu@gateway/tor-sasl/eridu] has quit [Ping timeout: 250 seconds] | 19:41 | |
-!- jmil [~jmil@2001:468:1802:e148:223:32ff:feb1:9dfc] has quit [Quit: jmil] | 20:15 | |
augur | happy tesla day! :D | 20:28 |
kanzure | foucist: i have a few projects :p | 20:32 |
kanzure | and i've been around the block i guess.. | 20:32 |
kanzure | fenn: what's the difference between liquid handling and liquid pipetting? | 20:33 |
foucist | augur: let's commemorate tesla day by pumping electricity into the ground and using that to power all sorts of devices all over the world! | 20:37 |
kanzure | let's use the earth to electrocute the moon | 20:38 |
foucist | ? | 20:38 |
kanzure | you have to think like tesla: grand scale shit | 20:38 |
augur | GRAND SCALE YEAH | 20:39 |
augur | FUCK YOU, MOON | 20:39 |
-!- nchaimov [~nchaimov@c-24-20-202-138.hsd1.or.comcast.net] has quit [Read error: Connection reset by peer] | 20:48 | |
-!- nchaimov [~nchaimov@c-24-20-202-138.hsd1.or.comcast.net] has joined ##hplusroadmap | 20:49 | |
-!- nsh [~nsh@wikipedia/nsh] has quit [Ping timeout: 260 seconds] | 21:05 | |
-!- jmil [~jmil@c-68-81-252-40.hsd1.pa.comcast.net] has joined ##hplusroadmap | 21:11 | |
-!- mayko [~mayko@71-22-217-151.gar.clearwire-wmx.net] has joined ##hplusroadmap | 21:12 | |
kanzure | fenn: well, joe delivered on alex kiselev getting the money | 22:16 |
kanzure | so when he says he can get this he's probably right.. but likely not on his idea | 22:17 |
kanzure | it'll be easy to line up aubrey to "advise" or whatever to throw some names on board.. of.. uh, whatever it is | 22:17 |
kanzure | i don't see why a pipetbot would be of particular interest though? | 22:17 |
kanzure | like if it's straight business, the obvious thing to do would be to copy some old business that already works | 22:18 |
kanzure | if it's something that's supposed to be achievable, useful, not-necessarily-turned-into-a-company-or-else, i think you or even i would have much better ideas? | 22:19 |
kanzure | heh now i am getting ads for "ePCR" or "electronic patient care reporting" | 22:39 |
-!- mayko [~mayko@71-22-217-151.gar.clearwire-wmx.net] has quit [Ping timeout: 240 seconds] | 22:40 | |
kanzure | why would they choose the same acronym | 22:40 |
-!- BaldimerBrandybo [~PixelScum@ip98-177-175-88.ph.ph.cox.net] has joined ##hplusroadmap | 22:47 | |
-!- PixelScum [~PixelScum@ip98-177-175-88.ph.ph.cox.net] has quit [Ping timeout: 276 seconds] | 22:51 | |
-!- jmil [~jmil@c-68-81-252-40.hsd1.pa.comcast.net] has quit [Quit: jmil] | 23:18 | |
--- Log closed Mon Jul 11 00:00:37 2011 |