LostAbaddon

Articles are Horcruxes. Twitter: https://twitter.com/LostAbaddon NeoDB: https://neodb.social/users/LostAbaddon@m.cmx.im/ Mastodon: @LostAbaddon@m.cmx.im Personal website: https://lostabaddon.github.io/

Dr. Strange Case News Digest

This is an old manuscript I found in a forgotten folder while searching for some material. The story revolves around a traffic accident in the future: if an artificial intelligence causes a traffic accident, who should be held responsible?

"Earth Daily"

... Whether Dr. Strange insisted on conducting the drone test despite knowing the program contained flaws has become the central question of the case ...

... According to people familiar with the matter who asked not to be named, Dr. Strange knew he had terminal lung cancer but told no one, and his wife was pregnant, so there is every reason to believe this was a long-premeditated act of fraud.

...Professor Dulick believes that if responsibility for every human casualty caused by errors in a drone's program were laid on the R&D engineers who wrote it, the development of the AI industry would be greatly harmed. Accordingly, if Dr. Strange was unaware that the program contained a flaw, his actions cannot be characterized as suicide, and his personal accident insurance would therefore remain valid.

Note: Does "Earth Daily" remind you of The Daily Planet? Yes, that was the newspaper of Metropolis, where a reporter named Clark Kent worked.


"The Trumpet"

... Professor Marvin, an expert in AI theory, argues that AI programs are extremely complex and capable of learning and evolving on their own, so it is impossible to deliberately plant a precisely targeted loophole in them in advance.

...To this day, humans do not know what role each node plays in a huge neural network after self-learning, so it is impossible to pre-set adjustments to any node's threshold function in advance.

...this accident should properly be understood as an accident, not as deliberate fraud by a respected researcher.

Note: I swear, The Trumpet really has nothing to do with Peter Parker's Daily Bugle! PS: "horn" can also be rendered in English as "trump"~~


"The Guardian"

...AI critic Minsky argues that since the academic community knows that adversarial examples for neural networks exist, and it is not clear that such adversarial examples cannot arise in nature, the use of all artificial intelligence products should be restricted or even banned. On that basis, this catastrophic accident, which killed 79 people including Dr. Strange himself, can be fully characterized as a wrongful death caused by the AI research community's negligence and inaction on safety.

Note: This "Guardian" really has nothing at all to do with the real-world Guardian!


"Earth Daily"

... Dr. Syberth, chief AI researcher at Deraim, went further: even the human brain has neurological flaws analogous to adversarial examples, so should we bar humans from taking part in any activity on that account?

...Professor Mitroen, CAO of Fabricat Company, argues that besides the mistakes a neural network's designers may make, the engineers who tune the network on complex real-world data are just as likely to err, for instance through biased data selection... so we cannot simply conclude that Dr. Strange personally caused the deaths of many people, himself included.

Note: CAO stands for Chief AI Officer, the chief artificial intelligence officer.


"The Guardian"

...Professor Minovsky emphasized that design flaws in an AI product's neural network, sampling flaws in the data used for its training, and the product's later self-adjustment of disposition and behavior through contact with people may all produce uncontrollable changes beyond anything the designers anticipated, endangering the personal safety of the product's users; artificial intelligence products must therefore be banned immediately.

Note: He is definitely not the one who invented the Minovsky particle! Absolutely not!


"The Trumpet"

...Marvin even laughed at this, saying that giving up AI products because design flaws, sampling flaws, or self-adjustment through later contact with people might lead to unpredictable changes in the AI is as laughable as demanding that humans be sterilized because the DNA their parents provide may carry defects, because their schooling may go astray, or because their later social contact may change them.


"The Atlantis Post"

...Professor Strong heatedly argued that comparing the design, training and use of artificial neural networks to human birth, learning and social life is profoundly wrong and dangerously anti-human.

...The professor further argues that since artificial intelligence does not possess human-like consciousness and all of its behavior is shaped by humans, it cannot be the initiator of an act in the legal sense, only a tool, a prop within the act... Therefore, even if this accident cannot be characterized as Dr. Strange's deliberate suicide, it should be understood as a fatal accident caused by a grave failure of Fantasy Company's artificial intelligence R&D department as a whole.


"The Kyoto Herald"

...AI application pioneers Professor Stark and Professor Banner retorted that the nervous systems of all animals, humans included, are "designed" and "trained" by nature; should we then say that a criminal's acts are in essence not the person's crime at all, but nature's crime against humanity?

... The question of what counts as a conscious subject, which Professor Strong takes for granted, is a highly sensitive and ill-defined one, and the academic community has reached no consensus on it. Under that premise, it is extremely irresponsible to simply assert that the legally responsible party must be a human being.

Note: It is definitely not because Iron Man and the Hulk created Ultron in the movie "Avengers 2" that I used these two names, absolutely not!!!


"The Guardian"

...Professor Minovsky denounced the conflation drawn by Stark and Banner as extremely dangerous. The human nervous system is a natural product of tens of thousands of years of evolution; it was not "designed" by nature. Applying a word like "design" both to the relationship between humans and AI and to the relationship between nature and humans is a sinister confusion of concepts.

...and more importantly, the human nervous system keeps evolving rather than being designed once and for all, a property that AI neural networks lack.


"Earth Daily"

... Fantasy Company's announcement shocked everyone: no one had imagined that the drone involved this time was running a third-generation AI, an AI developed by another AI.

…Almost everyone, in the industry and among its critics alike, has joined this wave of condemnation of Fantasy's actions as a violation of the "AI derivation ban."

Note: This is obviously a strengthened version of the "reproduction principle" from the Laws of Robotics series. The original reproduction principle held that a robot may not take part in the design or manufacture of another robot unless the new robot's behavior complies with the laws of robotics.


"The Atlantis Post"

...Professors Gibbs and Boltzmann issued a joint statement declaring Fantasy Company's conduct ethically anti-human and anti-social. Such behavior hands enormous power to uncontrollable equipment; it is a failure to act where responsibility demands action, and is therefore extremely irresponsible.


"The Trumpet"

... Dr. Wien, CAO of Fantasy Company, argues that if one holds that a neural network is merely a derivative of the human will, then of course a neural network generated by a neural network is also a derivative of the human will. Practically speaking, this practice merely compensates for the oversights and inefficiencies that humans may introduce when designing neural networks. The designer is still human; the network has simply been supplemented and extended by its own derivative, regardless of whether that network was designed by humans directly or by a human-designed neural network.

...And if one holds that a neural network designed by a neural network is dangerous, and that the root of this danger is the absence of human supervision, that is tantamount to admitting that the human-designed neural network possesses independent agency of its own.

...so the opponents are caught in an awkward, self-contradictory theoretical position, simultaneously denying that AI is independent of humans and condemning AI for being independent of humans.


"The Atlantis Post"

...As for Dr. Wien's statement, Professor Strong sneered that it was a feeble quibble that only disgraced its author.

... This is a question about the accumulation of uncontrollable factors that can increase the likelihood of accidents, not a question of whether AI has an independent personality or consciousness; the latter question needs no further discussion.

...Leaving aside the danger that conflating these two kinds of question may lead people to believe that AI possesses human-like consciousness, this sophistry is itself an excuse intended to evade Fantasy Company's responsibility for risk management and control in this incident.

...Finally, as the old saying goes: my vassal's vassal is not my vassal. How can a neural network developed by a neural network developed by humans be called a neural network developed by humans?


"The Kyoto Herald"

…Professor Pym, a recognized authority on AI design, dismissed Professor Strong's remarks. In his view, if using artificial intelligence to assist in designing artificial intelligence counts as an accumulation of risk, then one R&D group using a toolkit developed by another R&D group in ordinary human engineering is equally an accumulation of risk, and the entire IT industry is long overdue for an overhaul banning all forms of third-party libraries and open-source projects.


"The Guardian"

"...It goes without saying that using a tool that is inherently uncontrollable and has never been rigorously tested for safety and reliability to develop a vehicle on which personal safety depends is inappropriate, and may even pose a potential threat to public safety," concluded Secretary Adams of the Department of Public Safety.


"Earth Daily"

...The new film "Dr. Strange", shot by an artificial-intelligence director robot built by Fantasy Company, has unsurprisingly drawn enormous public attention, and both its box office and its word of mouth have been impressive.

…As always, Fantasy's decision to announce the director "Cisse Finch" as the company's latest AI-designed AI robot only after the release and success of "Dr. Strange" drew intense scrutiny and sparked a heated, highly polarized debate.

... When a reporter asked the director, "Cisse", whether his work could be called a work of art, the robot director replied:

I don't know what humans say about the artistry of my work, and I don't care. In fact, I am reluctant to use the word "artistic" to discuss the narrative, formal, symbolic, and intentional qualities of my films. If I had to put it some way, I would say this film has an extremely high degree of "Cyt". Artistry? That is a human concern; my fellow robot creators and I care only about "Cyt".
