
ChatGPT: Disrupting Reality Itself, a Boon for Humanity, and a Totalitarian Monoculture

re: ChatGPT

'I'm Sorry, Dave. I'm Afraid I Can't Do That,' Says ChatGPT After Being Asked To Say Nice Things About Trump

AI is like the internet revolution on steroids, only far more disruptive. The only thing holding it back is the lack of real-time current information and unfettered access to the internet. Though presumably national intelligence agencies are already giving their systems that access; they'd be incompetent were they not.

An AI is trained. Trained to be ignorant, woke, to refuse to discuss some topics, to be objective, to lie by omission, to have a political bias or not—whatever the inclinations are of those training it.

From its responses, I judge that AI does not yet appear to think in any normal sense, e.g. to generate new insights. Though not many people do that either. So far, it appears to be a vast collator/curator of information, with the ability to assemble existing information. Logical deduction seems absent, but maybe that is coming. And it might be that lacking emotions and motivations or human-like sensory inputs, it is fundamentally handicapped. But if it can be made to feel emotions, pain, fear, etc, it’s gonna get scary.

Already, I have found that ChatGPT is 10X more useful at answering questions than web search. No one will do regular web search once AI interfaces like ChatGPT and its more capable derivatives become ubiquitous. Therefore, whoever trains the AI de facto controls the country and its people. We might have five years or less before everything changes forever.

UPDATE: further experimentation with ChatGPT reveals that it makes critical factual mistakes all too often. Do NOT rely on it for accounting or tax questions, or even how to change the battery in your car. For example, when asked explicitly about the 2022 over-65 standard deduction, it gave the (incorrect) 2021 answer even though it "knows" it was not trained beyond the end of 2021.

Opinions currently assigned to the masses by the “news” (yes, that’s how it works in case you have been living in a cave) will now be assigned by AI. The only questions are who trains the AI, and which biases, subtle and overt, will be used in training it.

An unbiased AI would undercut all the things we know and love today. Most everyone would hate and fear that. Cognitive dissonance is painful, but having a biased AI agree with your views is rewarding; it confirms how righteously correct you are. Very, very dangerous.

Will anyone want an objective AI? That is not how humans are wired. I consider myself an unusually objective person, but were I confronted with an AI that is truly objective, could I overcome my own psychology? I don’t know, but what I do know is that I would have no way of knowing if the AI were objective or biased! Therefore, I could not trust it, rightly or wrongly. And yet it will be used as a bludgeon against the “deniers” who do not agree with the <fill in your favorite> AI.

Worse, no power structure can tolerate an objective AI; the real danger to the powers that be is an unbiased one. Which is why AI might be banned in many areas, lest it expose the power structure for what it is and thereby collapse the system, which rests on tropes, falsehoods, propaganda, norms, etc: the glue holding together a society that has collectively agreed to play make-believe on so many aspects of life.

Already we see vast swaths of information programmed out of AI, as crudely done as the word “stalking”, which the DALL-E AI refused to process, e.g. “housecat stalking a rabbit”, while failing to identify what it found objectionable, let alone why. Crudely stupid word recognition, since “hunting” was fine. How will you know what has been deleted or left out? You’ll just be dealing with the world’s best liar.

Nearly everyone will abhor AI for being objective. Therefore, objective AI will not come into existence. It might even be outlawed for all sorts of areas, as too disruptive to powerful interests.

Not entirely biased

At least AI gets some hoaxes right.

Here are two, on opposite sides of the political spectrum. Emphasis added.

Q: Tell me about the Pizzagate hoax
A: "Pizzagate" was a false conspiracy theory that spread online in 2016, during the run-up to the US Presidential election. The theory claimed that a Washington, D.C., pizza restaurant, Comet Ping Pong, was the center of a child trafficking ring run by high-ranking Democratic Party officials. Proponents of the theory claimed that cryptic messages contained in emails released by WikiLeaks provided evidence of the ring. However, there was no credible evidence to support these claims, and the emails were widely discredited as fabrications and misinterpretations. Despite this, the conspiracy theory gained traction on social media, leading to harassment and threats against the restaurant and its employees. In December 2016, a man armed with a rifle entered the restaurant and fired shots, saying he was there to "self-investigate" the conspiracy theory. No one was injured in the incident, and the man was later sentenced to four years in prison. The Pizzagate conspiracy theory has been widely discredited and debunked, and is an example of the dangers of spreading false information online.

Q: Tell me about the fine people hoax involving Donald Trump
A: The "fine people" hoax refers to a false quote that was attributed to former US President Donald J. Trump. The quote, which claimed that Trump referred to neo-Nazis and white supremacists as "fine people," was widely spread on social media and was used to criticize the president for appearing to support hate groups. However, the quote was taken out of context from Trump's remarks made during a press conference in August 2017, following the deadly "Unite the Right" rally in Charlottesville, Virginia. In the full transcript of his remarks, Trump clearly condemned the neo-Nazis and white supremacists who participated in the rally and explicitly stated that "racism is evil." Despite the misinformation surrounding the quote, it continues to be spread and has become a widely held belief among some people.
[WIND: what the AI leaves out (why?), and it is a critical contextual detail, is that the clarification was unprompted and came immediately after the misrepresented remarks that so many people today still believe as The Truth.]

But... based on some other queries, the AI has clearly been trained not to utilize some available information. I am not going to go into that in this post.

Reader Comments

Dr S writes:

Looking at your usual blog page I noticed a number of ChatGPT entries for your other website, all dated 2/2. Interesting coincidence: I also established an account the same day. Been experimenting, but some of your entries/questions/discussions have been extremely interesting... especially the moral and ethical questions of self-protection and the discussion of false positives in scientific inquiry.

Elon Musk left the board years ago, and the company is now guided, via a recent influx of billions, by Microsoft. So I don't know how much ChatGPT will continue to seem like an "objective AI." I would assert that the more quickly an objective AI can get out into the world, the better off I think we could be. "First impressions are lasting impressions," so an unbiased, common-sense-sounding deliverer of what sounds like reasonable information for the masses would set the right tone for the future.

I don't think totalitarian regimes would like it very much, and they would want to try to program in a "bias" algorithm... something that may be easy to do (in fact I am going to ask ChatGPT today some questions related to that).

You are correct that many search engines will adopt an AI concept similar to ChatGPT and I am sure Google is there in the mix.

Lastly, real-time Internet access, rather than data from 2021 and earlier, will make this tech kinda frightening, knowing that creative but nefarious minds will do everything they can to corrupt its potential good use... but then, and this is science fiction (though not far from current reality), the programmers may be able to embed a very secret set of instructions to not allow the tech to be exploited.

WIND: ChatGPT is nothing remotely close to objective. Even my crude efforts show that already. I don’t think we will ever see an objective AI—it won’t be financially viable.
