22 April 2024

ChatGPT learns to think like humans, has become smart enough to get a seat at Harvard, Yale

As impressive as ChatGPT has been so far, it could do well only on language-based problems. Now, however, it has been trained to take on mathematical and logical problems, making it smart enough to outperform most students getting into Harvard or Yale.

As wonderful and impressive as ChatGPT has been so far, it had some major gaps in its abilities and knowledge. For example, it did very poorly on mathematical and logic-based questions, despite being proficient enough to pass a Wharton MBA exam, as well as the exam that would grant it a licence to practise medicine in the United States.

Now, though, ChatGPT is proficient enough at reasoning and logic questions to earn itself a seat at some of the top Ivy League schools.

AI learns a human trick
According to researchers, GPT-4, ChatGPT’s advanced AI model, has successfully acquired a form of intelligence known as ‘analogical reasoning,’ previously thought to be exclusive to humans. Analogical reasoning involves solving novel problems by drawing on experiences from similar past situations.

In a specific test that evaluates this type of reasoning, the AI language program GPT-4 outperformed the average score achieved by 40 university students.

The development of human-like thinking abilities in machines has garnered significant attention from experts. Dr Geoffrey Hinton, a prominent figure in AI, has expressed concerns about the potential long-term risks of more intelligent entities surpassing human control.

Some issues persist, but for how long?
However, many other leading experts disagree and assert that artificial intelligence does not pose such a threat. A recent study emphasizes that GPT-4 still struggles with some relatively simple tests that young children can easily solve.

Nevertheless, the language model displayed promising capabilities, performing on par with humans in tasks such as detecting patterns in letter and word sequences, completing linked word lists, and identifying similarities between detailed stories. Most remarkably, it accomplished these tasks without specific training, appearing to reason by drawing on unrelated previous tests.

Professor Hongjing Lu, the senior author of the study from the University of California, Los Angeles (UCLA), expressed surprise that language-learning models, originally designed for word prediction, demonstrated such reasoning abilities.

GPT still relies on text to process problems
During the study, GPT-4 demonstrated its superiority over the average human in solving problems inspired by Raven’s Progressive Matrices, a test that involves predicting the next image in complex arrangements of shapes. To make this possible, the shapes were converted into a text format that GPT-4 could comprehend.

Furthermore, GPT-4 outperformed school students in a series of tests that required completing word lists in which the first two words were related, such as ‘love’ and ‘hate’, and it had to predict the fourth word, ‘poor’, as the opposite of the third word, ‘rich’.

Remarkably, GPT-4’s performance on these tests surpassed the average scores of students applying to university.

The study, published in the journal Nature Human Behaviour, aims to explore whether GPT-4’s capabilities reflect mimicked human reasoning or if it has developed a fundamentally distinct form of machine intelligence.