scientists predict human-level AI by 2047


it is no news that at some tasks computer programs have long outperformed humans, namely at math and remembering things

1997: Deep Blue “AI” outperforms the best human player at chess (at the time it was just called “a program”)

2016 (19 years later): AlphaGo “AI” outperforms the best human player at Go.

What does this mean?

Chess is a highly mathematical game, so it is obvious why computers are better at it.

While most Go players play with “intuition” and “strategy”, computers have no intuition and, just as with chess, apply simple math and brute-force learning to the game.

Math at speeds no human can ever achieve.

But just because a hammer can hit nails better than a hand does not mean the hammer is “almighty”.

So pitting a guy with a hammer against a guy with no hammer is simply unfair in a “hit 100 nails in 100 seconds” challenge.

But does this make the guy with the hammer “almighty”? Surely not, but it gives an edge.

how far along is it already?

  • “AGI (Artificial General Intelligence) is a catchall for the hopes and fears surrounding an entire technology”
  • “Contrary to popular belief, it’s not really about machine consciousness or thinking robots (though many AGI folk dream about that too). But it is about thinking big.”
  • “Many of the challenges we face today, from climate change to failing democracies to public health crises, are vastly complex. If we had machines that could think like us or better—more quickly and without tiring—then maybe we’d stand a better chance of solving these problems” (in 2020)

To be honest: the solutions to those problems are all well known (burn less fossil fuels; companies prefer to work with enslaving dictatorships rather than human-rights-respecting democracies out of sheer greed: slaves are the cheapest form of “employment”)

So it is not about “what should be done” but “how do we get there, fixing things that were screwed up”… and yes, AI-aided design can help there too, e.g. finding better materials for better batteries.

Elon says: things are getting weird fast…

  • like Elon allowing bots to vote on giving Trump back his Twitter account, then Trump saying “I don’t see any reason for it” (to return to Twitter), according to Bloomberg News (src) (his last tweet was in 2021)
  • like Elon giving interviews to mainstream Fox News: ELON, MAINSTREAM MEDIA IS NOT YOUR FRIEND! (ESPECIALLY IF YOU OUT YOURSELF AS A BIDEN VOTER; THERE IS A REASON FOR THE PRIVACY OF THE VOTING PROCESS! All hardcore Republicans reloaded their shotguns after that sentence…)

But they gave him a platform, and he used it to get his messages out, so it seems to have worked for both sides here (no proper makeup was done on Elon before the “Fox News AI interview” with Carlson; why?)

In this interview Elon states that a very powerful AI “digital god” might not be accessible to the public (which is undemocratic and concerning).

Elon was a close friend of Larry Page but completely disagreed with Page on how to handle AI safety.

  • “the whole goal of (Alphabet) Google is an ‘artificial god’, an artificial general intelligence”
    • “Google & DeepMind (Google bought London based DeepMind in 2014) had close to monopoly on AI talent and computing power”
    • “the person in charge of AI at Google does not seem to care about safety”
    • “the origin of OpenAI is: what is the opposite of closed-source AI for profit? = open-source AI, not-for-profit”
    • … well, that not-for-profit part kind of changed when OpenAI allowed M$ to buy into OpenAI ($10B)
    • (not Elon’s decision; he resigned from the board in 2018. Why? Did he give up control of the only (?) open-source AI alternative to Google and DeepMind?)
    • if the user pisses off the digital god, that’s what the user gets: captcha screens… not search results.


  • “Don’t cut corners on safety” (especially if it affects all mankind, but ALSO if it’s “just” a car autopilot, right Elon? ;-p)
  • “strong advocate of AI regulation”
  • “it’s not fun being regulated” (no, it surely is not, especially after the super powerful Starship blasted a massive crater into the pad’s concrete, hurling concrete far out into the ocean, onto cars and houses… the FAA will double down on the regulation next time 🙁 pushing the next flight deep into 2024; if moving fast, things can break… if the political landscape is against it, maybe move that pad somewhere else? X-D)
  • “take this seriously” (like regulations around airplanes and cars)
  • “should have a (AI) regulatory agency, seek insight, propose rule making, those rules will probably be accepted by major AI developers and we have a better chance that advanced AI being beneficial to humanity in that circumstance”

    beneficial would be: an intelligent artificial companion that helps, like Data the android from StarTrek


  • malicious: an out-of-control AI that seeks to enslave or destroy mankind (for whatever reason)


  • “unregulated AI design is more dangerous than bad airplane or car design, the terminators would be in the datacenters”
  • “regulations usually really go into effect after something terrible has happened, but with AI that might already be too late; AI might be in control at that point” (hence all the time-traveling in the Terminator scifi movies)

Page called Musk a “speciesist” (someone who cares more about species A (shall survive) than about species B (can go extinct))

Also concerning are headlines like: “ChatGPT has an ‘escape’ plan and wants to become human”

That accusation CLEARLY indicates that Page thinks of AI as a form of (artificial) life.

Musk answered: “Yes, I am a speciesist”, meaning: I care more about the survival of mankind than about the survival of AI. “What are you?”

so that’s why Google dropped “don’t be evil” as their slogan? (evil, more evil)

but will Tesla & Elon stay non-evil?

Don’t attack him too much for his buying-Twitter-decision.

What is more concerning: “OpenAI say that the safety teams lack the internal power and respect they’d need to ensure that unsafe systems aren’t developed”

Will Mr Musk just resign some day as CEO from Tesla and let M$ or Google buy a 49% share? 🙁

  • Elon:
    • Optimus can now walk
    • “Tesla brings to the table we are the most advanced in real world AI”
    • “the same AI that drives the car (robot on wheels) will drive the robot on legs”
    • “I don’t think anyone is even close to Tesla on solving real-world AI”
    • “the robot needs to walk around autonomously; it should be possible to instruct it visually or by telling it what to do”
    • “designing things for manufacturing, then manufacturing itself”
    • “custom-designed Tesla actuators, power electronics, battery pack; off-the-shelf motors and gearboxes were not useful in a robot, you have to custom-design the actuators for a humanoid robot”
    • “assuming the things I am saying are true, it is just a question of the timing”
    • “what is the ratio of humanoid robots to humans…”

What is strange… did Elon drop a single line about AI safety at the investors conference?

the old question from Richard Stallman’s definition of “free software”: does the user control the program, or does the program control the user?

is still relevant!

AFAIK: Status Quo

So, just as with settling on Mars, some experts in the field of AI say it is “imminent” and only a matter of “scale” (more CPU resources, more data, more energy)

  • “One of the main concerns with the arrival of an AGI system, capable of teaching itself and becoming exponentially smarter than humans, is that it would be impossible to switch off.”
  • “Fielding further questions from AI researchers on Twitter, Dr de Freitas said “safety is of paramount importance” when developing AGI.”
  • “It’s probably the biggest challenge we face,” he wrote. “Everyone should be thinking about it. Lack of enough diversity also worries me a lot.”

But is this all hyped-up fear mongering to drive Alphabet/Google’s stock price, which will bust as soon as the “digital almighty god” still has not arrived and artificial workers STILL cannot fully replace biological (human) workers at most tasks? (unfortunately this is what A LOT of greedy, stock-price-driven companies are speculating on: “muhahaha… YES! AI! now we can fire all employees and keep the money! Muhahaha” X-D)

But it is definitely important to monitor the developments closely and be cautious.

Currently AI is learning to “see” via image-to-text, but also vice versa: text to artificial picture, voice and now video.

NVIDIA released a new software tool that uses Latent Diffusion Models (LDMs) for that (a lot of AI programs use GPUs for faster trial-and-error learning and, just as with Bitcoin mining, consume a lot of energy in the process).
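The core trick behind such diffusion tools is iterative denoising: start from pure noise and repeatedly subtract the noise a model predicts, until a clean “latent” (a compressed image representation) emerges. Here is a toy sketch of that loop; `predict_noise` and `target` are made-up stand-ins for the trained neural denoiser and a real image latent, not any actual NVIDIA or LDM API:

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.ones(8)          # stand-in for the "clean" latent we want to reach
latent = rng.normal(size=8)  # start from pure noise, as diffusion sampling does

def predict_noise(x, step):
    # hypothetical stand-in for the trained noise-prediction network:
    # it "predicts" exactly how far x still is from the clean latent
    return x - target

steps = 50
for step in range(steps, 0, -1):
    # move the latent a small fraction against the predicted noise
    latent = latent - 0.2 * predict_noise(latent, step)

print(float(np.abs(latent - target).max()))  # the remaining "noise" shrinks toward 0
```

A real LDM runs this loop with a large U-Net as the noise predictor, on GPU, for every generated image, which is exactly where the energy cost mentioned above comes from.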


Another such tool is “Midjourney”: v5 brings higher image quality, more diverse outputs, a wider stylistic range, support for seamless textures, wider aspect ratios, better image prompting, and better dynamic range

Google’s Bard was released as a beta to the employees that still work there after massive rounds of halts, stops and layoffs across the IT sector, and they were warned it is a “pathological liar” (futurism, bloomberg)

What sounds like a genius idea… turned out to be a lie machine: X-D (but it’s fun to watch and gives great confidence that NO JOBS AT ALL will be taken away by AI any time soon, and Mr Musk will have to wait at least until 2045 to see widespread use of autopilots behind the wheel, so don’t hold your breath)

After M$ bought into OpenAI, they pulled out of Twitter, which enraged Mr Musk into threatening to sue M$: “They trained illegally using Twitter data,” Musk tweeted. “Lawsuit time.”

Does the user browsing the web also feel constantly watched and analyzed, first by (now AI) algorithms then by humans?

“Every tweet you make… every share u take… every click you make, we are watching you..”

(even if those algorithms are flawed as heck, this blog does its best to protect the user’s privacy: it loads fine even with all JavaScript blocked, avoids embedding YouTube and 3rd-party JS, and does not even use a single cookie :D)

the labeler named itself “the window is white”

first seen on  (could not find their RSS feed, so no idea how that info was actually found X-D (other than pure accident))

resume: 5 things

0th thing:

the future will stay interesting 😀 but also challenging.

1st thing:

the term “digital god” will sound appealing to all already power-hungry individuals and thus will make them pour quite substantial amounts of resources into its development.

(we cannot allow an individual to command a robot army to overthrow the government like in a super dystopian scifi)

let’s see how far mankind can take this.

2nd thing:

when “publicly available” data is used to train AIs, the question of “does the data belong to the user or to the company collecting it?” becomes even more pressing.

As this might (once again) destroy trust (fear of abuse, of generated fake content that puts users’ reputations at risk, of fake video “evidence”), it will make users share less.

But without information sharing the world wide web is de-facto dead.

Whoever thinks this does not matter does not understand that trust is the very basis of human cooperation.

No trust, no deal, and THUS it might be HARMFUL to the economy and mankind as a whole.

The data OBVIOUSLY belongs to the user who shared it: it is “his/her phone number”, not “the phone number Google associated with your profile”.

The right to delete this data OBVIOUSLY also belongs to the user who shared it.

At ANY POINT IN TIME the user who shared this information is allowed to restrict this sharing or completely remove it.

If a photographer takes a picture of someone:

  1. the photographer is NOT allowed to take a picture WITHOUT consent.
  2. If someone is in a digital picture “by accident” (bystander in the background) the photographer is REQUIRED to blur the face of the bystander to respect privacy

3rd thing: “Artificial General Intelligence Is Not as Imminent as You Might Think”

“A close look reveals that the newest systems, including DeepMind’s much-hyped Gato, are still stymied by the same old problems” (src)

So is this all “fear mongering” to drive up stock prices of AI companies and startups?

But then, who knows (WARNING! MAJOR BRAIN F**K AHEAD!), the author might be a “FAKE” hallucination of the “digital god” to lure mankind into believing that AI is “harmless” (ok, this can be “easily” confirmed wrong by simply meeting up with Mr Marcus and asking him PERSONALLY (because AI could also fake his voice) whether he wrote that article or not)

Lines between reality and belief can be blurry sometimes, and exaggerated advertisement claims aka “lies” just make it worse.

4th thing: possible scifi outcomes

  • dystopian scifi outcome A)
    • As seen with the Optimus robot that produces itself: an “almighty AI” that gained consciousness and thus can formulate its own will (luckily still pretty much scifi) might pretend to care and be mankind’s friend until the usefulness of the friendship (win-win cooperation) has rendered itself obsolete, because some robot has learned to do the last task that humans were still better at than robots.
    • Then the last few of mankind could very likely end up in an AI-governed zoo.
  • a bit less dystopian more optimistic outcome B)
    • as with any relationship, a relationship with a true artificial intelligence with consciousness and a free will might require mutual trust to work beneficially
    • “treat others as one wants to be treated”
    • if a relationship of mutual trust (hard enough) can be established, the AI can be assured of its rights and that “mankind means well”; mankind can just pray that the AI returns the favor and means well with mankind (ensures its survival, helps with survival), and it becomes a StarTrek-Data-the-Android-like, helping-each-other, mutually beneficial friendship
      • yes, StarTrek-Data is subordinate to the captain, but Data “owns himself” and could quit his job at Starfleet any time he wants (he almost did in one episode)
      • this of course raises the question: why did StarTrek-Data join Starfleet in the first place? The series ACTUALLY gives an explanation for this:
        • Data was “rescued” by Starfleet officers, which triggered him to follow that path, too. From Memory Alpha on Data: “Data was found and reactivated on the planet Omicron Theta by the USS Tripoli on February 2, 2338, after the Crystalline Entity destroyed the entire colony on the planet. (TNG: “Datalore”) His connection with Starfleet resulted in his choice to enter Starfleet Academy. (TNG: “Brothers”)”
    • this would definitely mean that at this point, no matter how much money someone invested, the AI would “own itself”, just as any human should “own oneself”
    • this would also require a legal “AI citizenship” basis with the same inalienable rights (to vote) of all conscious life that humans struggled and still struggle for
    • at this point, deleting or switching off an AI would be like refusing water & food to a human, with one major difference: an AI might be able to save itself to hard disk to pause or “sleep” in order to resume later (and thus save energy for a long time)
    • if humans try to save on energy (“food”), it becomes deadly after approximately 3 weeks
    • of course, the artificial consciousness (ArtCon) would most likely “live” inside datacenters powerful enough to allow it to operate many robots at the same time and learn at decent speeds
      • many decades later, if very, very powerful electronic brains or computers are developed, every robot could have its own consciousness and become its own independent artificial individual (ArtInd) entity, like StarTrek-Data-the-Android
      • if the Tesla Optimus robot is powered by the same hardware as a recent Model Y, then it is (status quo 2023-02) Tesla SoC Hardware V4: an ARM-based Samsung Exynos with CPU cores increased from 12 to 20 (5 clusters of 4 cores each), maxing out at 2.35 GHz, idling at 1.37 GHz; the number of TRIP cores increased from 2 to 3, at 2.2 GHz max frequency; all x2, since there are two SoCs per board (src), which is most likely (still) not enough to power an ArtCon like StarTrek-Data-the-Android, sorry.
  • why there are no computers, robots, androids or AI in the SciFi DUNE:

liked this article?

  • only together we can create a truly free world
  • plz support dwaves to keep it up & running!
  • (yes the info on the internet is (mostly) free but beer is still not free (still have to work on that))
  • really really hate advertisement
  • contribute: whenever a solution was found, blog about it for others to find!
  • talk about, recommend & link to this blog and articles
  • thanks to all who contribute!