A Study of AI Products

I wrote about this last time, but I would like to continue a little.

 

 I think we are finally going to see, in real life, something straight out of dispiriting science fiction: people prefacing things with "According to the AI's answer...". As I put it in the previous article, the idea of treating ChatGPT as "a neutral and fair opinion" is likely to emerge.

 

 As I have mentioned before, it is also possible that responsibility will come to be placed on the AI...

 


 Let me introduce a short story by Shinichi Hoshi that left a strong impression on me long ago.

 Reproducing it from memory, I think it went something like this.


 The supercomputer (what we would now call an AI; "supercomputer" was the term of the day) issues an order that humans do not really understand: enact laws and regulations to protect hippopotamuses and revere them as "the Hippopotamus." Humans reluctantly obey the order and, at any rate, protect the hippopotamuses thoroughly.


 Humans follow the law while thinking it is ridiculous, but after a while an unknown virus drives pigs, chickens, and other livestock to extinction, and there is nothing left to eat.

 

 However, hippopotamuses are resistant to the virus, so they are fine.


 When humans ask the supercomputer, "What should we do?", it hands them recipes for cooking hippopotamus deliciously.
 In this way, making full use of its resources, the supercomputer helps humanity overcome this unprecedented crisis.

 However, the supercomputer's responses had begun to drift slightly off.

 After the crisis was over, the supercomputer issued the following instruction: "Humans must walk on all fours."


 However, no one notices that the supercomputer has gone a little mad, and no one questions the order; surely it is for the sake of something in the future...

 I remember that the story was something like this (the details may be a little different).

 I was not sure of this as I wrote it, so I looked up the title, and I believe it is the following.

 Hippopotamus" (included in the future Sosopu) by Shinichi Hoshi / Shincho Bunko

 There was also an article in the Kyoto Shimbun connecting this story to AI and a virus, so it seems this really is the one.
 

note.com


 It is probably this work since it also matches my memory.
 
 I read it a long time ago (I read Shinichi Hoshi in the library when I was in junior high school... so of course I do not have the book itself with me, and my memory is a bit fuzzy), but somehow it stayed with me.

 Things may not go as far as Shinichi Hoshi's short-short, but it is still a little worrying.

 Once the AI turns out to be right, everyone may be swayed at once into following whatever it says.

 But...


 In this sense, I feel that Shinichi Hoshi still reads as fresh today.

 

 On the other hand, I believe there will also be a trend toward unusual opinions that AI does not offer, the kind of answers that ChatGPT would probably never return.

 Conspiracy theories are likely to gain momentum in this day and age, and I have a feeling that "secret information known only to a few" and "truths that AI will not tell you" will also become popular.
 I am sure we will see heated conspiracy theories along the lines of "AI is under the control of XXX" (the Trilateral Commission, the Council on Foreign Relations, Big Tech, international Jewry, international finance, international investors, the Rothschilds, secret societies, the Freemasons, some XXX that must not be named, and so on and so forth; or some new AI organization, or, if you like, the Flying Spaghetti Monster, whatever you please), with any number of partial truths blended in to make it all look plausible.

 

 

 Conspiracy theories aside, the more we rely on AI, the more important differing opinions will become, but the more contrarian they are likely to get as well: opinions that AI cannot give, and so on and so forth... Perhaps there will be books about it.

 Perhaps books with titles like "What You Can't Hear from AI" will be published.

 However, I think that the more contrarianism increases, the more it is likely to muddy the information space.

 

 →As a result, AI's opinions will likely be neutral among the various opinions, and chimerical in that they incorporate a bit of everything, but people will probably settle on the idea that they can at least be trusted as a rough outline for the time being.


 It will be difficult to draw more diverse opinions out of that, and the question becomes how the receiving side is to take in this jumble of gems and stones, the good mixed in with the bad...

 You could say there are more and more things to think about when it comes to information.

 In this sense, Japanese academic societies, the old media, and some universities, which some people insist are unnecessary and which are losing their former role, may become valuable less as important repositories of knowledge than as platforms that carry diverse opinions and assure a certain level of quality.


 That is the sort of thing I have been thinking about.

 

 In addition, I have been writing about this area (↓) and related topics for quite some time.

 

 

penginsengen.hatenablog.com

 I think there should be some kind of compensation or guarantee system, or a mechanism that allows the original data creators to control in advance whether or not an AI may be trained on their work.
 As has already been reported in the news, there is a great deal of opposition from data creators, and unless this point is settled properly, the development of AI itself will eventually suffer.
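
 For what it is worth, one partial mechanism of this kind already exists: crawlers that gather training data can be declined via robots.txt (OpenAI's GPTBot and Google's Google-Extended are published user-agent tokens for this). The sketch below is only an illustration, under those assumptions, of how an ingestion pipeline might honor such a signal; the agent name "ExampleAIBot" and the helper function are hypothetical, and robots.txt by itself offers no compensation and no control over data that has already been collected.

```python
# A minimal, illustrative sketch: checking a site's robots.txt before using a
# page for training. The agent name "ExampleAIBot" is hypothetical; real
# training crawlers publish their own tokens (e.g. GPTBot, Google-Extended).
from urllib.parse import urlparse
from urllib import robotparser

def may_train_on(url: str, agent: str = "ExampleAIBot") -> bool:
    """Return True only if the site's robots.txt allows `agent` to fetch `url`."""
    parts = urlparse(url)
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()  # creators opt out by adding "User-agent: <agent>" / "Disallow: /"
    return rp.can_fetch(agent, url)

if __name__ == "__main__":
    print(may_train_on("https://example.com/some-article"))
```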
 
 As with moral rights, there is a view that regards copyrighted works as an extension of oneself; I personally would prefer to focus on property rights, but that framing may not fit every case.