hugme: (Default)
[personal profile] hugme
These are supposed to be scientists!?!? what the fuck? do they not have a clue about how technology works or what?

what a bunch of fucking morons... if I see any regulation laws for this you can bet your ass I am going to start working on a product that breaks them... dumb fucks...

http://www.timesonline.co.uk/article/0,,2087-2230715,00.html

Are you kidding?

Date: 2006-06-19 10:43 am (UTC)
From: [identity profile] the-hueman.livejournal.com
So let me get this straight: scientists in the field want to ensure that conscious machines won't harm humans, and you're determined to hack that effort?

So you want to program machines to be able to harm people?

When did you become a mad scientist?

Re: Are you kidding?

Date: 2006-06-19 11:50 am (UTC)
From: [identity profile] hugs.livejournal.com
no, that is NOT what they are trying to do... read the article more closely... they are trying to put restrictions on how robots are programmed because they are worried about the robots being smart enough to take over. They watch a lot of Hollywood crap and think it's real.

that is NOT how computers work, at all. That is not how our society works. Robots/computers do what they are programmed to do; there is no consciousness, and there never will be. There will never be a time when we don't need programmers, people to give computers/robots directions. Robots can never and will never think for themselves. It's bullshit.

'programming machines to be able to harm people' -- we already have these, lots of them, and we have for a long time. People still get killed by threshers, paper mills, etc... but neither these nor any other machine can make a decision... they all take input and process that input according to whoever is running them.

The laws this article is talking about have to do with specific programming methods, not the results of those methods, and that is what is wrong. We already have laws against building things specifically designed to harm or kill people; we don't need new laws telling us how we are allowed to program things just because idiots like this don't understand how programming and computers work.

They are just a bunch of guys trying to make a name for themselves by preying on the fears of others... it's not only sad but dangerous...

Re: Are you kidding?

Date: 2006-06-19 01:55 pm (UTC)
dwivian: (Default)
From: [personal profile] dwivian
Dude, you're so last century....

That is *EXACTLY* what they are trying to do. They are asking for supplemental programming guidelines to be considered so programmers take more responsibility for the outcome of complex interaction of code objects.

Let's take a physical program problem and see how unexpected consequences arise from real-time processing of highly complex objects:

Magic:The Gathering is a card game that provides instruction objects (cards) that interact according to a set of established rules, via two engines. The first is the rules engine (game mechanics), and the second is the inference engine (the human player, acting according to conscious motivations). The order in which the objects interact is quasi-determinate, but it is incredibly complex to calculate EVERY possible interaction. Fortunately for 'fun', the inference engine is not so powerful as to evaluate every potential, and instead sees only slightly forward, both from expanding complexity, and from the vagueness of having another engine whose motivations are not completely known.

Game play goes until one of a set of states is achieved. Thus, this is a state-machine, with pathways that can be plotted. The game designer knows this to some extent, and by manipulating the objects and rules engine they can maximize possibilities, while providing for state closure and a determinate outcome. They are aware that they can't see EVERY interaction, but they can try to balance the design to avoid lopsided play.

HOWEVER, early on in the game's design a flaw was created (the first-draw win) that made it possible for the first player, in certain initial states, to win without allowing the other player any chance at all. This was an unforeseen outcome, revealed well after the game was released. Once the combination was discovered, something had to change or the game could have suffered (and, likewise, sales, which is a strong motivator for a company). They could have removed the cards, but this was not reasonable, having sold some already. Instead, the rules engine was modified to prohibit this outcome with a special exception.
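That errata process can be sketched as a tiny rules engine whose transition table gets a special-case patch after release (everything here is invented for illustration, not actual Magic rules):

```python
# Toy rules engine: a state machine whose transitions can be patched
# after release, the way a card game's errata forbids a broken combo.
# States and names are invented for illustration.

class RulesEngine:
    def __init__(self):
        # The original design: each state maps to the next.
        self.rules = {"start": "midgame", "midgame": "endgame"}
        self.exceptions = {}  # special-case patches added post-release

    def patch(self, state, replacement):
        """Prohibit an outcome with a special exception."""
        self.exceptions[state] = replacement

    def step(self, state):
        # Exceptions take priority over the shipped rules.
        if state in self.exceptions:
            return self.exceptions[state]
        return self.rules.get(state, "game_over")

engine = RulesEngine()
print(engine.step("start"))        # midgame
engine.patch("start", "mulligan")  # a first-draw-win flaw is found
print(engine.step("start"))        # mulligan
```

The point mirrors the story above: the shipped objects stay exactly as they are; only the rules engine gains an exception.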

**THIS IS ENTIRELY WHAT THIS GROUP SEEKS TO DO WITH ROBOTIC DEVICES**

We have very few machines that are programmed to be able to harm people -- they are programmed to do specific tasks, without regard for potential damage to people. The difference? The programmer didn't sit down and ask "how, exactly, could I weld this seam, but keep open the option to burn a hole through Bob's left testicle?" The problem is not action, but lack of forethought (and Bob's weird kink with welding robots). Most of the time this isn't an issue as general use of computing devices is very task driven. Increasing complexity of interactions, though, is making it necessary to consider these questions before it becomes so prohibitive that new code is impossible to write!

But, if we have computing power and the ability to add additional inputs, it is a much more trivial task to add additional decision trees that allow suspending or aborting programmed processes until a risk situation ends. This makes the robot 'aware' of certain potential problems, and provides the ability to take mitigating actions.
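As a minimal sketch of that extra decision tree (the sensor and task names are made up), the guard is just checked before each programmed step:

```python
# A control loop that suspends its programmed process whenever a risk
# input is active, resuming when the situation ends. The 'human in
# workspace' sensor is hypothetical.

def run_task(steps, human_in_workspace):
    """Execute each step, replacing it with SUSPEND while at risk."""
    log = []
    for i, step in enumerate(steps):
        if human_in_workspace(i):
            log.append("SUSPEND")  # mitigating action: do nothing
        else:
            log.append(step)       # the normal programmed process
    return log

# Someone wanders into the cell during step 1:
print(run_task(["weld", "weld", "move"], lambda i: i == 1))
# ['weld', 'SUSPEND', 'move']
```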

EVERY COMPUTER THAT EXECUTES AN IF/THEN STATEMENT MAKES A DECISION. The decision is limited, sure, but the state of the machine is not necessarily known by the programmer at coding time, nor at compile time. If it were, the IF/THEN could be optimized out of the object code. Every machine, be it designed by people or by evolution, that takes input and provides different output as a result, makes a decision.
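To put that claim in code (the 'reading' input is a made-up stand-in for any runtime state):

```python
# The branch below cannot be optimized away: which path executes
# depends on input the programmer never sees at coding or compile time.

def react(reading):
    if reading > 50:      # decision node: two live alternatives
        return "ABORT"
    return "CONTINUE"

# Same object code, different outcomes for different runtime inputs.
print(react(80))  # ABORT
print(react(10))  # CONTINUE
```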

Philosophers have tried to determine what makes the complex interaction of rules shift from "just a machine" into consciousness, and it isn't something we easily wrap our brains around. If we eventually develop a machine with as many rules and inputs as we have, it may be impossible to determine the difference between its output and our own, save for the possibility that it will be able to do the processing much faster. When that is the case, we need to be assured that some of those rules we've implemented include protection of ourselves from what those decisions might be.

Re: Are you kidding?

Date: 2006-06-19 02:12 pm (UTC)
From: [identity profile] hugs.livejournal.com
no matter how complex the equation, it's still an equation derived by a programmer, whether from a primary or a secondary source. There are already laws prohibiting this. Making laws restricting types of programming is pointless and a step backwards.

Your example of Magic is a good one. There were flaws exploited in the game, but they still had to be exploited by a being, not a machine -- someone who can make decisions.

an 'if/then' statement is NOT a decision, it is a pathway. In the case of a decision there are two possible outcomes; as sentient beings we are able to choose which of those outcomes we would like. Computers do not and will not ever make decisions like this. There are not 2 outcomes, it's a gate; the gate is either pointing one direction or the other. There was no decision as to which gate the computer would take, it HAD to take the path predetermined by the programmer. The programmer made the decision to have that gate pointing in this direction; the computer did not make a decision.

bring up all the crazy philosophy you want... it's still a computer, an inanimate object. There is no mind in it, it's just a bunch of electronic parts doing exactly what its programmer told it to do; it cannot make a decision to do any more or any less. Anyone who tells you differently believes too much in Hollywood.

Re: Are you kidding?

Date: 2006-06-19 04:03 pm (UTC)
dwivian: (Default)
From: [personal profile] dwivian
No matter how complex the lifeform, it is still a combination of quantum equations derived by that of a creator (or happenstance, if you believe such). And, yet, we make laws that restrict lifeforms all the time....

You are most decidedly wrong about basic programming mechanics. First, a decision is defined in computing (and, most everywhere else), as the determined active choice of one entity (choice, action, opinion, etc) from a set of known alternatives (options), after reviewing a selection criteria against known states. If you made a decision, you evaluated several pathway possibilities, and based on some criterion, selected one.

If/then statements are nodes of decision trees. It is NOT "a pathway", but multiple pathways. Or, to use the language above, it is a collection of known alternatives. Only at the decision step can the pathway be determined. We have not deviated from the earlier definition. Computers not only CAN make decisions like this, they do so millions of times a second, based on their state within the engine. There are multiple possible outcomes until the tree is collapsed. Had the path been predetermined by the programmer, and realized as a path that MUST be taken, the optimizer or programmer could have removed it entirely without consequence.

So, you are wrong -- the programmer does NOT determine which pathway to take. The program provides alternatives, and rules for selecting between them. The input to the decision tree is what causes the decision, and THAT is something that may not be programmer-driven. In fact, it could be completely driven by data the programmer will never understand. You are just as programmed by the alternatives you understand before making anything you call a decision, along with inputs from culture, education, and experience. All of these are quantifiable, in some fashion. That doesn't make us significantly different from a complex engine.

You may be arguing that the list of known alternatives will never reach significant complexity, but I will dispute that using data from four years ago. You may attempt to argue that the engines necessary to weigh between more than two alternatives are beyond calculation, but I will dispute that with data from twenty years ago. You may be arguing that there is something mystical that makes us different from machines, inherent to our being, but when you do, you'll be doing so from a position entrenched firmly in faith.

You would do well to take courses in object theory, programming concepts, and structures and design. Once you realize that the way two or more objects interact can be well beyond the intent or plan of any programmer, you begin to see how life and computing devices are not that divergent.

Remember, life is just an inanimate object that has a highly complex set of rules. That we don't fully understand the rules doesn't change things. Get enough objects, some with rules we don't fully understand, and the outcomes are definitely difficult to predict. This is why it is a good idea to get rules together in advance, so that we encourage programmers and software designers to consider complex object interaction, and to provide frameworks to make THEIR objects perform within safe constraints.

I have no interest in Hollywood, but I have seen the same highly complex real-time parallel program run twice with different outputs. The answers, while derived from what the programmer intended, exceeded the programmer's expectations.

In sentient organisms, we call this "my kid did something I, the parent, didn't think they could!" Show me how this is in any way different from a computing device with multiple programmers?

Chaos Theory

Date: 2006-06-19 04:27 pm (UTC)
From: [identity profile] melonaise.livejournal.com
And just in the realm of nonlinear differential equations... You can have a differential equation on paper, made up entirely of familiar functions, but be entirely incapable of predicting the behavior of a solution. If you're lucky, you might be able to say, "Sometimes it's around this number, and sometimes it's around that number, but I really couldn't tell you when it's near which. Sorry." And all that unpredictability is encapsulated in just one line of math--in the field that's supposed to be the most deterministic of all.
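The logistic map is the textbook one-liner for this: a fully deterministic formula where two starting points a hair apart soon end up in unrelated places (a standard chaos demo, not something from the article):

```python
# x -> r*x*(1-x) at r=4 is fully deterministic, yet sensitive enough
# that a 1e-10 difference in the start is amplified roughly 2x per
# step, so the two trajectories soon stop resembling each other.

def logistic(x, r=4.0, n=50):
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

a = logistic(0.2)
b = logistic(0.2 + 1e-10)
print(a, b)  # two unrelated values after 50 steps
```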

I have to say the article was pretty annoying, with its connections to sci-fi and paranoia. It came across more as "We're doing this because Isaac Asimov said to" rather than "We're doing this because it's a natural extension of good programming practices."

Re: Chaos Theory

Date: 2006-06-19 04:34 pm (UTC)
dwivian: (Default)
From: [personal profile] dwivian
I'll grant that, but I wonder how much of that was the reporter, and how much was the committee?

Re: Are you kidding?

Date: 2006-06-19 05:42 pm (UTC)
From: [identity profile] hugs.livejournal.com
no... What you are saying is that my shovel decided to get up and start digging a hole. No, my shovel did not decide anything; it's inanimate. I decided to use my shovel to dig a hole.

When a person makes a decision they may have things laid out in front of them; they have several possibilities. Even if every fact wants them to move in one direction, they have the option to make a decision to move in the other direction. A computer CANNOT do this. It is given input, it takes that input and it creates output; it's no different than my shovel. No matter how advanced that shovel may be, it does not make a decision to dig.

To have a decision you have to have more than one choice. A computer does not have a choice; it MUST follow what it's programmed to do. A shovel will only dig a hole if I point it in that direction. There aren't 2 choices for the computer to make, only one. You cannot make a decision if you only have ONE choice.

Think about this: we have never built a true random number generator. Each 'pseudo' random number generator we have created still has to be based on environment variables, or equations, etc... the computer cannot just 'pick' a number at random; it cannot make a decision.
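That point is easy to demonstrate: a pseudo-random generator is just an equation replayed from a seed. Here is a minimal linear congruential generator (the constants are the widely published Numerical Recipes values):

```python
# A pseudo-random number generator is a fixed formula: same seed in,
# same 'random' stream out, every single time.

def lcg(seed):
    state = seed
    while True:
        state = (1664525 * state + 1013904223) % 2**32
        yield state

a, b = lcg(42), lcg(42)
print([next(a) for _ in range(3)])
print([next(b) for _ in range(3)])  # identical to the line above
```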

Re: Are you kidding?

Date: 2006-06-19 07:07 pm (UTC)
dwivian: (Default)
From: [personal profile] dwivian
Logical Fallacy: Straw Man. I have never said your shovel decided to start digging a hole. That is not up for debate. You lose the point.

(And, if you cannot distinguish between a computing device and a shovel, you've got a lot of study to accomplish. To be fair, there ARE fully automated digging engines that follow guidelines in doing positional digging, and they are described as taking input parameters before DECIDING WHERE BEST TO DIG HOLES. SO, if you want to debate that an automated shovel can decide to dig a hole, we can go there, but you've already lost before we start.)

When ANYONE OR ANYTHING makes a decision, they have things laid in front of them, and select from one of those options based on their internal motivations (program), their circumstance (environment variables), and the current situation (input). This is what a computer does every time it reaches a conditional branch. This is what YOU do. This is why awareness of the potential possibilities is crucial in making good decisions -- you cannot decide to elect the best candidate if they aren't running, or never made it onto the ballot, without knowing they are an option. Neither can a computer program decide among outcomes it does not know. But, it CAN decide among those it does.

Every programmer, every place, has written a program, executed it, and said "huh.... that's funny" when it did something unexpected. The intent of the programmer, and the actual execution of the program, can vary. In addition, the programmer of a device driver never intends to have buffer overflows, but if they make faulty program choices, the computer can take a driver OUTSIDE what the object was programmed to do, into areas influenced by object interactions. You might argue that the computer is still following the program, but I would say the same of you and your own internal motivations.

Which leads to your last point -- Not only have we never built a random number generator, YOU can't pick random numbers, either! Try, and I'll argue that the situations that caused you to spout out "69" are based on the quantum calculations that caused you to exist, and on circumstances and predilections of your character and mind, along with trends you have towards numerical selection. In short, true randomness is impossible. What is your point, then? Did you decide a number, or not? What makes your choice a decision?

So, back to your shovel. Mine is automated, and has this program:

if dig_a_hole:
    outdev("START DIG_HOLE")
else:
    outdev("START IDLE")

What is the output of that program?

Re: Are you kidding?

Date: 2006-06-20 06:36 am (UTC)
From: [identity profile] hugs.livejournal.com
Ok, so then you are under the belief that no matter what we do in our lives it was all predetermined in a set of equations and there is nothing we can do to change that... so in effect if you are destined to do something, it will happen and there is nothing you can do to change it.

You are saying there is no such thing as choice at all... that we as humans already have a path set out in front of us that we are going to follow. Wow, that really gets rid of any thought of religion, doesn't it?

Well, if you honestly believe that then there isn't really any more I can say.

But I would have to disagree

No matter how complex you make a shovel, it's still a shovel. It just has a different operations manual; it takes different inputs and provides a different level of outputs. A robot is the SAME THING. It takes inputs and gives outputs. It does not make decisions, it doesn't have feelings, it can't think. It's a freaking shovel. It is given an input and provides an output; it cannot make a choice, it cannot make a decision.

Also I am one to believe that we DO make our own choices, we are in control of our own lives. We can make changes to our 'destiny'

Re: Are you kidding?

Date: 2006-06-20 01:51 pm (UTC)
dwivian: (Default)
From: [personal profile] dwivian
Again with the straw man -- I have never said that I believe in predestination. There is a concept known as arbitrary outcome, which is a result of highly-complex equation interaction. Given two highly complex systems created from familiar subsystems and formulae, it may not be POSSIBLE to know the solution in a meaningful way. And, since subsequent interactions may be predicated on that solution, it may never be reasonable for an observer inside the system to identify an outcome in a predetermined way. Hell, it may not be possible to know the outcome from the outside, either (see [livejournal.com profile] melonaise's earlier math example). This is known in both deterministic non-polynomial questions and non-deterministic questions. Look up Turing's Halting Problem and Gödel's theorems for examples of a deterministic way to not know where you are going.

But, yes, predetermined destiny can appear to revoke the need for religion (but not necessarily -- the cosmic watchmaker philosophy allows for religion as an exercise). I don't see why this is a problem -- are you proposing that religion is necessary? Or have I suggested a straw man for you? :)

So, given the cascading equations from the Big Singularity until now.... Can we still make (or appear to make) a choice? Yes. Is it a decision? That's debatable -- one of the fundamental ideas behind decision making is the knowledge of alternatives, and a policy for selection of a final entity. Choice is a superset of decision, in that it can include non-deterministic inputs, where a decision really can't. To decide, you evaluate rules and input. To choose, you might decide or you may make a random selection among equals (which is inherently non-deterministic).

Can a computer be programmed to use a similar non-determinism? Certainly -- this is done in real-time modelling all the time, in fact. But, it's ugly, and it is arguable that the non-determinism of inputs is illusory. The whole point of quantum computing is to remove the illusion of non-determinism and preserve solvability, and I expect to see more research into that barrier.

So, it comes to this: you will have to show me how you make a decision that differs from what a computer does. You will have to show me what a feeling is, in a meaningful way, so that it cannot be emulated in software. And, you'll have to define thinking in a way that I cannot program in a highly complex tool, for me to believe that a computing device will never think.

In short, you are stating that computer science is at its zenith, and will never achieve new solutions or constructions. I will have to disagree, and I easily see a point where the idea of complex constructions and interactions may create the suggestion of self-awareness. This is the concept of interface in computing -- if you look like a rock, and I manipulate you like a rock, and in all ways you seem to be a rock, you are a rock to me. It doesn't matter that you are the projection of a strange lifeform from another dimension into my own; all I need to know is "rock" to interact.

Thus, emotion may someday be quantifiable sufficiently to allow a device to act in all ways to an outside observer as if emotions were felt, and that outside observer may be an object within the same construct (a self-aware routine, for instance). The only remaining way this may be distinguishable from a 'real' emotion is if it is rooted in a biological event, something I don't think we're going to stick into robots for a while. Does that make it any less an issue? Not at all.

And, that's the point of the article, all around -- If we accept that sufficiently advanced technology is indistinguishable from magic, it is equally reasonable that on the way we will reach a state where the technology is indistinguishable from the familiar forms of ourselves.

Re: Are you kidding?

Date: 2006-06-19 09:43 pm (UTC)
From: [identity profile] the-hueman.livejournal.com
Bravo!

Well said.

Date: 2006-06-19 02:22 pm (UTC)
From: [identity profile] nausved.livejournal.com
Aw, jeez....

The only robot I could imagine making any kind of choice for itself would be a human outfitted with robot parts. But that would still be a human, not a robot.

Robots will never, ever be "alive" or "conscious". Not only do I believe we'll never reach the level of technology to create life, let alone a computer, as complex as a plant or animal, but I really don't think it's even physically possible. Something that complex would almost undoubtedly be reliant on the same chemicals and processes as every other lifeform on earth. That means it would have to be made up of carbon, making the creation organic and therefore not a robot.

Silicon does have a similar electron configuration to carbon, but it's too large to be as versatile. I really can't see it being the basis for life--at most, nothing more complex than the simplest of bacteria. And even if silicon bacteria ever did come to exist, I wouldn't exactly call it a robot.

Date: 2006-06-19 04:28 pm (UTC)
dwivian: (Default)
From: [personal profile] dwivian
What justification do you have that complexity requires carbon as a basis? Why not silicon? Why is versatility required for complexity? Versatility promotes alternatives, but at the core of complexity is a limited subset of the infinity that those alternatives encompass. Could it not be true that the bridge between sentience and servitude is less complex than would require a carbon footing?

While I would agree that naturally-occurring complexity relies on carbon (being the dominant complex-chain chemical, with four valence electrons in the outer shell), I do not eliminate the possibility that a manufactured entity could enjoy previously unseen complexity in the silicon arena. A silicon-based analog to benzene could be produced mechanically -- would it do similar things, or just be a framework? I cannot say, but I can't claim it doesn't, either.

Even so, we make carbon-fibre wings for aircraft, making them organic. Does that make them any less manufactured? No -- And a robot crafted from graphite and diamonds is no less manufactured, either. What matters is the nature of its creation and programming. For that, the underlying materials are less significant. I don't see why an organic robot is unreasonable in your worldview.

Our concern, at the core, is NOT that a robotic or computing device be alive, but that it be self-aware. This is a much lower hurdle.

Date: 2006-06-19 07:07 pm (UTC)
From: [identity profile] nausved.livejournal.com
I don't have proof, of course, but I have evidence and theory. All life that we know of is carbon-based. All of the most complex entities we know of, which all happen to be lifeforms, are carbon-based. Self-consciousness, as far as we can tell, requires a brain or something very similar to a brain. The brain runs on complex chemical processes, which ultimately rely on carbon-containing molecules. The reason carbon is so useful is that it can directly bond to up to four different atoms at a time, and if it connects to other carbons, they can form a theoretically infinitely large structure. It is just this characteristic of carbon that lends itself to DNA strands. The longer the DNA strand, the more information can be stored, and the more information can be stored, the more complicated its host can be. I am not saying that an intelligent computer would necessarily require DNA, but at the very least it would require some sort of minuscule information storage like DNA or protein.

The only other elements that can link to as many atoms, and thereby ultimately carry the most information, are silicon, germanium, tin, etc. But these other atoms are much larger than carbon--silicon's atomic weight is over twice carbon's, for example--which makes their bonds much weaker. Realistically, it would be very difficult to form large silicon molecules because their bonds would be very apt to break.

Perhaps you conceive of a machine whose "brain" would be on a much larger scale, rather like the computers of today. But I really don't think that would be physically possible. The cortex, the source of our consciousness based on studies of brain damage, has a great deal of complexity in a very small space; high-speed messages, much faster than a computer can manage, are a must.

Difficulty in understanding a subject does not necessarily mean that that subject is more complex than others, but that is usually the case. We really know very little about the brain, especially in the area of consciousness, while we seem to have more or less figured out almost all other life processes. We have even managed to create protobionts--the forerunners of all life on earth--from scratch, and I won't be surprised if someday we create RNA entirely from non-living elements. I would certainly say that creating life is a considerably lower hurdle than creating consciousness.

"I don't see why an organic robot is unreasonable in your worldview."

How do you define robot? I wouldn't claim that a machine made from diamond wasn't a robot, but I do think there comes a point when we've stopped talking about inventing intelligent robots and have started talking about inventing new lifeforms.

Date: 2006-06-20 02:21 pm (UTC)
dwivian: (Default)
From: [personal profile] dwivian
Good points!

I define a robot as a created servile automaton that is fully formed at inception, or requires only minimal augmentation. This differs from life, where you grow physically larger as a part of the process, and from art/manufacturing where the end result is not intended to be a service entity.

And, yes, I agree that there are different hurdles -- I'm not sure about life vs consciousness, though. Maybe the fact that we are engineering cells now defeats my point, but I think that large complex systems (potentially nonportable as a result) may achieve a self-aware state when the supervisor constructs are added. I doubt it will materialize without such a self-supervising program, though, until we get programs writing other programs.

My idea for knowledge location, though, is on a highly distributed basis. Just as we have fast-twitch neurons that react to pain before we know it, I expect a robot would have neural operations across a larger volume than we need, making use of regional storage for the high-speed messages, and central storage for slower speeds (just as we do with neurons, nerve bundles, and the brain). As well, the concept of a 'collective' intelligence and networking seems inherently reasonable, thus limiting the need for large-scale storage in the central store. This makes robots much less portable, but that may be a necessity as we reach limits of storage density.

Date: 2006-06-19 02:38 pm (UTC)
From: [identity profile] nausved.livejournal.com
More comments about this article itself, because this is just too good:

"Robot may not injure human or, through inaction, allow human to come to harm"

I certainly hope they don't pass this law, because a lot of programmers and manufacturers will end up in court!

"'My guess is that we’ll have conscious machines before 2020,' said Ian Pearson."

Consciousness in 14 years?! We just figured out how to make robots walk! We better get cracking.

"To critics who scoff that intelligent robots are a long way off, the roboticists easily riposte that machines can already exert surprising influence over our lives — think about the influence of the internet."

That's a rather poor argument. Climate and landscape have, for millennia, exerted perhaps the greatest influence on humans--considering, you know, it's what drove our evolution--but I highly, highly doubt we'll ever see a self-aware raincloud.

"'People are going to be having sex with robots within five years,' [said Christensen]."

Well, we know what's on his mind. Seriously, though, I bet people have already had sex with robots, considering people will have sex with anything penetrable.

Date: 2006-06-19 04:20 pm (UTC)
dwivian: (Default)
From: [personal profile] dwivian
I fail to see why walking is such a big deal to cognitive capacity... :)

Remember, this is a machine with awareness, not a robot. Putting that machine INTO a robot will be a later step.

Before I start arguing about self-aware climatospheres, we'll have to come to a decision about what self-awareness means, and what it means to be able to express that. I am not yet convinced that it isn't possible that the earth is a self-aware system, albeit slowly changing.

Date: 2006-06-19 06:27 pm (UTC)
From: [identity profile] nausved.livejournal.com
Please tell me if you define it differently, but I generally take self-awareness/consciousness to mean that one is aware of one's own existence or, at the very least, experiences the world.

I see consciously, for example. When I see something, I am aware that I am seeing. I experience sight. It is much, much more than a simple input device.

By comparison, I generally don't pump blood, fight infection, or grow older consciously, unless I specifically go out of my way to notice these things. Each one of these processes is still incredibly more complex than any supercomputer, but they in no way compare to the processes in the cortex.

Self-consciousness is an extremely advanced achievement. Most life on earth thrives unconsciously, and there is some debate as to whether even most animals are self-conscious (though I personally believe that most, at least among craniates, are).

It is true that we have no way of knowing for certain what others experience--as far as I know, I might be the only being in the world who is capable of thought. But, then again, we have no way of knowing whether the universe even exists at all; our senses could be lying to us.

In science, however, certain assumptions must be made in order to make advancements. We must assume that what appears to be true is true until there is evidence or proof of the contrary. So perhaps trees and planets are self-conscious, but there is no compelling evidence of this, and it would be silly for scientists to assume it to be so. (Hypotheses are another matter, of course.) Unfortunately, many scientists do make unfounded assumptions, but hopefully this will continue to be frowned upon and gradually weeded out.

Date: 2006-06-20 02:27 pm (UTC)
dwivian: (Default)
From: [personal profile] dwivian
With complex object interaction I expect to see a 'traffic-cop' process that is aware of subprocesses, but interacts with them via an interface. The "self" doesn't need to know how the heart process runs, but will need to know if it stops or needs tweaking for speed. And, the traffic-cop might not need to know about messages between subsystems (thus the heart racing and adrenaline release after seeing an especially attractive co-processor might be independent of the regulator up top, if so designed).
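That 'traffic-cop' idea can be sketched in a few lines of Python. Everything here is invented for illustration (the class and method names don't come from any real system): the regulator only sees each subsystem through a narrow status interface, never its internals.

```python
# Sketch of the "traffic-cop" regulator: it knows THAT subsystems exist
# and whether they report "ok", but not HOW they do their work.

class Subsystem:
    """A subprocess the regulator monitors through an interface only."""
    def __init__(self, name):
        self.name = name
        self.running = True

    def status(self):           # the only window the regulator gets
        return "ok" if self.running else "stopped"

    def restart(self):          # the only lever the regulator can pull
        self.running = True


class Regulator:
    """The 'self': aware subsystems exist, ignorant of their internals."""
    def __init__(self, subsystems):
        self.subsystems = subsystems

    def tick(self):
        events = []
        for sub in self.subsystems:
            if sub.status() != "ok":
                sub.restart()               # tweak, don't micromanage
                events.append(f"restarted {sub.name}")
        return events


heart = Subsystem("heart")
immune = Subsystem("immune")
regulator = Regulator([heart, immune])

heart.running = False           # the heart process stops...
print(regulator.tick())         # → ['restarted heart']
```

Note the regulator never reads `running` directly; it only calls `status()` and `restart()`, which is the whole point of the interface boundary.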

Thus, with this regulator, we have a process that will need to define boundaries between itself and the environment. Just as a 3yr old doesn't, but a 6yr old does, so will this entity need to describe itself as distinct from the surroundings. At *THAT* point, it is self-aware. What to do past that, though, I can't say. The critical point of understanding is communication -- if a tree is self aware, but can't let us know, how do we figure it out? It would seem we concur on that point.

I agree that we need to remove unfounded assumptions. I would argue, though, that we should not ignore unfounded opportunities -- that's where new research lies.

Date: 2006-06-19 02:47 pm (UTC)
From: [identity profile] girlvinyl.livejournal.com
I love this subject. It's utter bullshit to imagine that any of this futuristic crap will come close to materializing in our lifetime. There are SO many basic things that don't work properly. It may seem simple, but think about this...
Cable television has been around for MORE THAN 20 YEARS. More than 20 years! And it has advanced BARELY. Additionally it still BREAKS with measured frequency. Technology will never get to these highly advanced places because HUMANS create it and we are not that advanced. We landed on the moon 37 years ago. There has been almost no further progress on space living efforts. It's just so silly that people really believe anything can be 'taken over'. RIDICULOUS!

The closest possibility I can see is biotech stuff which is implanted in the brain [they already do this regularly] and having it malfunction to the point where it kills you or POSSIBLY makes you go nuts. But then it's again not about some machine randomly killing people or having sex with humans or something. It's a human fault in origin.

Date: 2006-06-19 04:32 pm (UTC)
dwivian: (Default)
From: [personal profile] dwivian
Surely we will never need more than 640 K of memory! And, who'd want to have more than 4 TV channels? Why would anyone want to cook food in 30 seconds? A phone you can carry in your pocket that can send images? That's Dick Tracy talk! It'll never happen!

Cable Television used to carry 12 channels on broad bands. Now it is digital, carries hundreds of channels (including international broadcasts), has bandwidth left over for internet data and telecommunications, and breaks less often than people do.

No, it isn't sentient, but it has come a long way.

Date: 2006-06-19 05:24 pm (UTC)
From: [identity profile] girlvinyl.livejournal.com
All of the progress you cite is basically quantitative. It can hold more or go faster. When talking about this whole nonsense of "computers taking over", those are the types of advances that really haven't been made. Not to mention, humans are flawed, programmers are humans, programmers are flawed. Code is written by humans who are flawed, code is flawed, thus the machines will be too. I believe the current state of technology is barely hobbling along right now - any huge advancements are far, far off into the future, if they ever do come. I actually believe in grey goo - but not for another 200 years.

Date: 2006-06-20 02:30 pm (UTC)
dwivian: (Default)
From: [personal profile] dwivian
I think we're much closer than that with the basics, and it is NOW that we need to make sure that we've evaluated those flaws so that we don't end up in a cascading error that ends with Futurama's Santa Bot. :grin:

But, the idea of complex systems and regulators that can manipulate and take over functions for us.... that advance has not only been made, but exists as subcomponents in many homes (smart thermostats that connect to the power company to assist with systemic regulation, as an example). Each day we grow and move the boundaries. Each day we develop better processes, stronger systems, and better error trapping to remove the worst of the flaws.

Now is the time to evaluate where we are going, before we find we took the wrong exit a decade ago.

Date: 2006-06-19 07:16 pm (UTC)
From: [identity profile] theonetrueruss.livejournal.com
Hmm.. I think y'all are getting tied down in the semantics of whether the robot is making a decision or just following a pathway through a program unpredicted by the programmer. The reality is that a large system can reach a point of complexity where calculating every possible path through the code is not reasonable. In a simple system we can safely say that certain actions are impossible without a programmer's or operator's intent to do them, but an unintended interaction of code in a robot could lead to disastrous effects.
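The arithmetic behind "not reasonable" is easy to show. With n independent two-way branches, a program has 2**n possible paths (the branch counts below are just illustrative, not from any real codebase):

```python
# Path counts explode exponentially with independent branch points:
# each extra if/else doubles the number of distinct execution paths.
for n in (10, 30, 60):
    print(n, "branches ->", 2 ** n, "paths")
```

At 60 branches you are past 10**18 paths, which is why exhaustively verifying a large program by enumerating its paths stops being an option very quickly.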

The concept of the 3 laws or whatever set of robot rules someone wants to try and make is fallible, as the rules will be implemented by the same programmers who do not know 100% how the rest of their code interacts. The concept is valid in the same way as putting rules and constraints on data tables is valid. The DBA wants to make sure a program cannot do certain things, so he/she sets up certain rules for a database. This is the same concept as robot laws. Everything still has to be set up and programmed correctly for it to work, and there is always a way to hack past the check, even if it means hacking the DB and outright changing the rules.
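The database analogy is concrete enough to demo with Python's built-in sqlite3. A CHECK constraint is a rule the database itself enforces no matter what the calling code intends; the table and torque range here are invented for the sake of the example:

```python
# A toy "robot law" as a database constraint: commands outside the
# safety envelope are rejected by the DB, not by the application code.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE actuator_commands (
        joint  TEXT NOT NULL,
        torque REAL NOT NULL CHECK (torque BETWEEN -10.0 AND 10.0)
    )
""")

# A command inside the envelope is accepted...
conn.execute("INSERT INTO actuator_commands VALUES ('elbow', 5.0)")

# ...but an out-of-range command is blocked by the constraint itself,
# regardless of any bug in the code that issued it.
try:
    conn.execute("INSERT INTO actuator_commands VALUES ('elbow', 500.0)")
    blocked = False
except sqlite3.IntegrityError:
    blocked = True

print(blocked)  # True: the rule held even though the program tried to break it
```

And, as the comment says, the check only holds as long as nobody with schema access drops the constraint -- the rule is a guardrail, not a guarantee.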

Robot laws/constraints are not needed to prevent bad decisions. They are needed to prevent disastrous bugs from harming people. The average person is incapable of making this distinction, so it can be talked about with the common man as though they are rules a robot must follow and cannot break no matter what it wants to do.

That is my unrequested opinion anyway.

Date: 2006-06-20 06:42 am (UTC)
From: [identity profile] hugs.livejournal.com
it is a thing. It falls under the same laws as any other THING. You are saying that it's the GUN that kills the person and not the abusive one behind the trigger. If I walk into a crowded area playing with a loaded weapon, and it goes off and kills someone, I will be arrested. I abused the safety of the gun. There are laws already on the books covering this.

Robots are no different. Yes, you can be stupid with them, but they are a THING, an object. See my previous example of a shovel... a robot does not and cannot think; it is a tool.

There are SO many things wrong with making laws like this! ARGH!!!

examples?

Date: 2006-06-20 11:46 am (UTC)
From: [identity profile] the-hueman.livejournal.com
John-

You really seem to be having a knee jerk reaction to this. I think you should pause and examine what exactly about this upsets you so much.

First off, I don't feel like beating a dead horse, but I think _dwivian_ has demonstrated the concept that given the complexity of computer code and the unforeseen results of complex interactions, it is wise to try and prevent actions that were unintended.

I find it hard to fault scientists for asking if there are ways to ensure human safety from unintended consequences. Don't simply dismiss them because of some bias that you have.

For example, you write:

"There are SO many things wrong with making laws like this! ARGH!!!"

Name seven.

So far everything you've written boils down to "computers are machines and machines can't think".

That may be true for NOW. It may not be true for much longer. I find it commendable that scientists are attempting to be proactive in design and development of futuretech. You seem so dismissive of scientists that are obviously much more knowledgeable about this than you are. Do you really think that these researchers would spend time on this if they really didn't feel it necessary? Perhaps they know something you don't?

Date: 2006-06-20 02:41 pm (UTC)
dwivian: (Default)
From: [personal profile] dwivian
When I was in high school, years ago, a person was killed by a gun that fired without any human pulling the trigger. It had been unloaded by a trained professional and stored, but a defect in the locker caused it to fall and discharge a round that had been missed during unloading.

Who is at fault? The inspector? The cabinet builder? The plastics company that created the latch? The bullet designer? The gun safety designer?

Or, is this a complex interaction in errors that created an unsafe condition, that might have been caught with one more oversight?

Every gun at that range is now stored in a back-tilting locker, with a metal clasp. The chamber is left open, and the cartridge removed. The inspector verifies the firearm is unloaded, and the storage manager inspects it a second time. There have been no accidents in the two decades since the new rules were put in place.

One life could have been saved by thinking about these rules in advance. But, it wasn't seen as reasonable, because guns don't kill people.

A robot is not merely a thing -- it is a complex interaction of systems. A shovel is a wedge and a lever. A robot can be hundreds of levers, wedges, screws, inclined planes, pulleys, etc. A feature of complexity is that one system plus one system is not always two systems. That's the way with robots.

And people.

Date: 2006-06-20 06:21 pm (UTC)
From: [identity profile] melonaise.livejournal.com
Well if Microsoft (http://news.yahoo.com/s/pcworld/20060620/tc_pcworld/126177) is getting involved in programming robots, we can bet there'll be some spectacular disasters.

Date: 2006-06-20 07:47 pm (UTC)
From: [identity profile] i-am-devilbunny.livejournal.com
Though I'm a Luddite and not very tech inclined, I always think: At one time mankind used to think the world was flat and the sun revolved around us. Just because one thinks it's impossible now, does not mean it's impossible in the future...