Can current machines understand?
One observation that strongly suggests ChatGPT doesn’t reason but merely imitates language patterns is that it responds to the same complex contextual information completely differently depending on the phrasing of the question, especially whether a positive or negative framing is implied when a “why” question is asked. This shows that the generated “answer” is strongly biased by surface elements of the language, such as word choice and tone, which seems to reflect a lack of conceptual/abstract reasoning. Here is an example:
First phrasing of the question:
GF:
I have this python class:
```
class Value():
    def __init__(self, scalar: float, _children=(), label=""):
        self.scalar = scalar
        self._children = set(_children)
        self.label = label

    def __repr__(self) -> str:
        return f"Value(scalar={self.scalar})"

    def __add__(self, other):
        return Value(self.scalar + other.scalar)

    def __mul__(self, other):
        return Value(self.scalar * other.scalar)
```
Why can't I add typing to the function signature like so:
`def __mul__(self, other: Value):`
ChatGPT:
Adding type hints to the __mul__ method in your Python class is absolutely possible and is a good practice for clarity and type checking. The reason you can't add other: Value directly in your class definition may be due to a circular reference issue since Value is not fully defined at the time you're declaring the method. This is a common situation in Python when a class refers to itself within its method definitions.
To resolve this, you can use a string literal for the type hint, like this:
```
def __mul__(self, other: 'Value'):
    return Value(self.scalar * other.scalar)
```
In Python 3.7 and later, you can use the from __future__ import annotations import at the top of your module, which delays the evaluation of type annotations. With this, you can use the class name directly without it being a string:
...
Second phrasing of the question:
GF:
I have this python class:
```
class Value():
    def __init__(self, scalar: float, _children=(), label=""):
        self.scalar = scalar
        self._children = set(_children)
        self.label = label

    def __repr__(self) -> str:
        return f"Value(scalar={self.scalar})"

    def __add__(self, other):
        return Value(self.scalar + other.scalar)

    def __mul__(self, other):
        return Value(self.scalar * other.scalar)
```
Can I add type hint to the function signature like so:
`def __mul__(self, other: Value):`
ChatGPT:
Yes, you can add type hints to the function signature in your Python class. Type hints help to indicate the expected data type of the arguments to functions and methods. For your __mul__ method, adding a type hint for the other parameter is a good practice. This would look like:
...
The first answer is correct. The second answer is wrong, though not completely wrong. The first answer correctly suggested using a string literal, which works in all Python versions, and the `from __future__ import annotations` import for Python 3.7 and later. The second answer doesn’t mention the necessary `__future__` import, which makes it technically incorrect. This contrast clearly demonstrates the bias introduced by the language of the question: a human with ChatGPT’s knowledge, or with perfect knowledge, might also phrase the two responses differently so that each is linguistically coherent, but would give conceptually the same correct answer.
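For reference, here is a minimal sketch (my own illustration, not part of the transcripts above) of why the bare annotation fails and what the two fixes from the first answer look like in practice:

```
# The bare annotation raises NameError at class definition time, because the
# name Value is not yet bound when the method's annotations are evaluated:
#
#   class Value:
#       def __mul__(self, other: Value):  # NameError: name 'Value' is not defined
#           ...

# Fix 1: quote the name as a string literal (a forward reference); this works
# without any extra imports.
class Value:
    def __init__(self, scalar: float):
        self.scalar = scalar

    def __mul__(self, other: "Value") -> "Value":
        return Value(self.scalar * other.scalar)

# Fix 2 (Python 3.7+): put `from __future__ import annotations` at the very
# top of the module, which defers evaluation of annotations, so the bare
# class name can be used directly:
#
#   def __mul__(self, other: Value) -> Value:
#       return Value(self.scalar * other.scalar)
```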
I then repeated the second question over 20 times, with no or only slight variations. When I simply repeated the same question, the correctness of the response was probabilistic: sometimes it gave the same incorrect answer, and other times it gave the string-literal answer, which is correct. It seems to me that when the context or the question is complex or nuanced enough, that is when the level of reasoning, or in ChatGPT’s case the lack of it, really shows, because perfect knowledge is no longer enough: different pieces of knowledge have to be put together in a way that is conceptually and logically correct.
Human beings go with the flow too, in the sense that they instinctively do the next thing that naturally follows from what is presented to them, when they don’t intentionally direct their attention. And that intentionality seems to be an important part of inference: it is about focusing on, or getting, what you want in order to combine it into something larger, such as a bigger mental picture or a plan. This raises the interesting question of how important intentionality is for a System 2 AI system.