New Version of ChatGPT, Based on GPT-4, Launches

U.S. artificial intelligence research lab OpenAI released GPT-4, its latest language model, for its popular chatbot ChatGPT on Tuesday, just four months after ChatGPT went live.

Compared with the GPT-3.5 model originally used by ChatGPT, GPT-4 brings a number of major improvements in accuracy and other capabilities, but it still suffers from errors, “hallucinations” and other shortcomings, according to media reports.


The following are the main improvements and shortcomings of GPT-4, as summarized in those reports.

More accurate
Chris Nicholson, an artificial intelligence expert and partner at the venture capital firm Page One Ventures, told GPT-4 that he was a native English speaker with no knowledge of Spanish and asked it for a syllabus that would teach him the basics of the language. GPT-4 provided a detailed, well-organized syllabus and even offered extensive tips for learning and memorizing Spanish words, although not all of its suggestions were on point.
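
For readers who want to try a similar request themselves, here is a minimal sketch assuming the official OpenAI Python client and an API key in the environment; the prompt wording is illustrative, not the exact prompt described above.

```python
# Minimal sketch: asking GPT-4 for a beginner Spanish syllabus through the OpenAI chat API.
# Assumes the official `openai` Python package (v1+) and an OPENAI_API_KEY environment variable;
# the prompt text here is illustrative, not the exact prompt from the article.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful language tutor."},
        {
            "role": "user",
            "content": (
                "I am a native English speaker and do not speak Spanish. "
                "Give me a syllabus that teaches me the basics of Spanish, "
                "including tips for learning and memorizing vocabulary."
            ),
        },
    ],
)

print(response.choices[0].message.content)
```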

Can describe images

Greg Brockman, president and co-founder of OpenAI, demonstrated how the system can describe images from the Hubble Space Telescope in great detail. It can also answer questions about the images. If given a photo of the inside of a refrigerator, it can suggest a few meals to make with what’s on hand.
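
Image input was demonstrated rather than broadly available at launch, but as a rough illustration, a vision-capable model can be queried through the same chat interface along these lines; the model name and image URL below are placeholders, not details from the article.

```python
# Rough sketch: asking a vision-capable GPT-4 model to describe an image in detail.
# Assumes the official `openai` Python package (v1+); the model name and image URL
# are placeholders for whatever vision-capable model and picture you have access to.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder: any vision-capable chat model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in as much detail as possible."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/hubble-deep-field.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```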

Good at standardized tests
OpenAI says the new system can score in roughly the top 10 percent on the Uniform Bar Exam (UBE) used in 41 U.S. states and territories. According to the company’s tests, it can also score 1300 out of 1600 on the SAT and 5 out of 5 on the Advanced Placement (AP) exams in biology, calculus, macroeconomics, psychology, statistics and history taken by high school students.

Not good at discussing the future
While the new chatbot seems able to reason about things that have already happened, it is less capable when asked to hypothesize about the future. It appears to draw on what others have said rather than generating new speculation.

Still prone to hallucinations
The new chatbot still makes things up. This problem, known as AI “hallucination,” plagues all leading chatbots. Because the systems do not know what is true and what is not, they can generate text that is entirely incorrect. When asked to provide the address of a website describing the latest cancer research, it sometimes generates a non-existent Internet address.
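
One simple precaution a reader can take is to check whether a suggested address actually resolves before trusting it. Below is a minimal sketch using only the Python standard library; the URLs shown are placeholders.

```python
# Minimal sketch: checking whether a model-suggested URL actually resolves.
# Uses only the Python standard library; the example URLs are placeholders.
import urllib.error
import urllib.request

def url_resolves(url: str, timeout: float = 5.0) -> bool:
    """Return True if the URL answers a HEAD request with a non-error status."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except (urllib.error.URLError, ValueError):
        return False

print(url_resolves("https://example.com/"))           # an address that exists
print(url_resolves("https://no-such-site.invalid/"))  # a made-up address: False
```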
