My commentary on the Kurzgesagt video...
> They say that over half of Internet traffic is bots, and over one third of those are 'bad' bots.
I am not so sure that all of the 'bad' bots are AI. Moreover, while this much traffic being bots is troubling, I hesitate to call it a doomsday scenario in itself, given that the bots may be feeding search engines. In fact, Google giving answers instead of directing traffic to websites may be useful, because the user may not need to dig through a long original text to find an answer.
Then again, "Kurzgesagt" does mean 'in a nutshell'. There might be a lot of abstraction involved.
> Worse, at least one linked article was made by AI, being a form of citogenesis.
A lot of the problems mentioned in the video were already big problems before LLMs, as you can see in the hyperlink. Even before Wikipedia, there was always the conundrum of whether the information you find is true, whether it came from someone's Geocities website or from Encyclopædia Britannica.
> AI confidently tells believable lies, then, when you call out AI's lies, AI promises to avoid lying, but lies again.
That is a very disingenuous way of describing how AI works. Since AI is merely a machine, it is not deliberately lying to you; it merely tries to correct any mistakes that you explicitly point out to it.
> Researchers noticed that, since Large Language Models came into use, certain words that AI favours have skyrocketed in scientific papers, implying at least AI assistance. In fact, some papers are hiding AI prompts by making the text match the colour of the paper or massively shrinking the font size, making AI review a paper positively instead of correcting the paper.
...that is actually a big problem.
> In the Internet, only human attention has any value.
I find this pessimistic and overly simplistic, but I do not know of any counterarguments.
> increases political divides
You are just making AI a convenient strawman.