A new Internet teaching tool reveals our major regrets.
What Is Regret Analysis In Online Machine Learning
By Amanda Mascarenhas
When it comes to artificial intelligence, most of us have a pretty good idea of what the implications will be: humans will interact with machines more naturally, drones will aid emergency responders, robots will one day walk the earth, and driverless cars will become the norm. But what about the machines themselves? How will they live in the world? Will they evolve? Will their senses or emotions grow or diminish, as is arguably happening in human interaction?
If you spend any time reading about technology or artificial intelligence, you will have heard that people worry about their own mortality, or about the lives of our robotic companions. We may feel quite comfortable with a person-machine hybrid relationship in the office, but imagine a world where the A.I. is sentient. Is the fear that our A.I. could ruin everything? Or is it mostly a concern about when the first bodies capable of mimicking human emotions will be produced, when these A.I. representatives will be marketed, or when A.I. thinking and emotion will be incorporated into autonomous vehicles?
Even though we take for granted the idea that we will live with these machines for many decades, the fear persists, and, as always, it is not limited to people. Should we be worried about a machine’s capacity for simulation? What happens to its consciousness? Should we be concerned about the “Robot as Technology” movement? What technologies are emerging as end-of-life solutions? What is our responsibility to the machines? The concerns are constant and intense, but the solution is simple: form a crew of robots and let them die.
In a study in the Computer Society Journal of Human Factors, Amos Kornfeld suggested the following on the topic of regret evaluation.
“… there are several classic techniques that can be used to find and analyze the pattern of neural responses, which are accompanied by the same pattern of emotions, for a variety of problems…”
That is, there are tasks that are tangible and that we can measure objectively. Typing a word or taking a math exam with ease, for example, is one kind of task that can provide data on working skills. Memory recall tasks are another kind, with a different depth, and everything we do on our computers has a degree of significance that can be measured. Humans who chose to join a crew and passed their tasks collect their rewards and move on, advancing their technology careers or other life endeavors. That is quite understandable, but what happens to the machines?
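One way to read this is that each such task yields an objective score that can be logged and compared across crew members. A minimal sketch of that idea, with entirely hypothetical task names and numbers:

```python
# Hypothetical task log: each entry records one measurable task outcome.
task_log = [
    {"task": "typing", "correct": 58, "attempted": 60},
    {"task": "math",   "correct": 8,  "attempted": 10},
    {"task": "recall", "correct": 7,  "attempted": 12},
]

def accuracy(entry):
    """Objective score for one task: fraction of correct responses."""
    return entry["correct"] / entry["attempted"]

# Summarize the log into a comparable per-task score.
scores = {e["task"]: round(accuracy(e), 3) for e in task_log}
print(scores)
```

Any real study would of course use richer metrics (timing, error types, difficulty weighting), but the point stands: these quantities are directly measurable, unlike emotion.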
Kornfeld and his fellow researchers argued that accidents or harm, whether to the machines or to the team members themselves, matter less than finding the pattern and causes of the reconsideration. In the computer program they used, the goal was to identify the context of each decision and the nature of the feedback stimuli, and they incorporated this not only into the experimental setup but also into the algorithm itself.
So, if a crew member misses a trigonometry assignment, why should we fear that? The researchers constructed a decision tree describing the possible situations, then optimized it to predict the outcome. In other words, they used the decisions and conditions that arose during the scenario, rather than the crew members’ emotions, to forecast an outcome. It took the researchers a few years of examining the problem to derive the processes behind their algorithms and release them as open source. These resources should let people learn more about this kind of problem, and could be a substantial asset for future engineering, technology, and education needs.
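To make the idea concrete, here is a minimal sketch of a decision tree learned from logged decisions and conditions rather than from emotion, in the spirit of the approach described above. The feature names, data, and labels are entirely hypothetical, and this hand-rolled ID3-style learner stands in for whatever the researchers actually released; it is an illustration, not their algorithm.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_feature(rows, labels, features):
    """Pick the feature whose split yields the highest information gain."""
    base = entropy(labels)
    def gain(f):
        weighted = 0.0
        for v in set(r[f] for r in rows):
            sub = [l for r, l in zip(rows, labels) if r[f] == v]
            weighted += len(sub) / len(labels) * entropy(sub)
        return base - weighted
    return max(features, key=gain)

def build_tree(rows, labels, features):
    """Recursively grow a tree; leaves are majority-class labels."""
    if len(set(labels)) == 1:
        return labels[0]
    if not features:
        return Counter(labels).most_common(1)[0][0]
    f = best_feature(rows, labels, features)
    node = {"feature": f, "branches": {}}
    for v in set(r[f] for r in rows):
        idx = [i for i, r in enumerate(rows) if r[f] == v]
        node["branches"][v] = build_tree(
            [rows[i] for i in idx],
            [labels[i] for i in idx],
            [g for g in features if g != f],
        )
    return node

def predict(tree, row):
    """Walk the tree from the root to a leaf label."""
    while isinstance(tree, dict):
        tree = tree["branches"][row[tree["feature"]]]
    return tree

# Hypothetical scenario log: decisions/conditions observed, and the outcome.
rows = [
    {"missed_assignment": True,  "time_pressure": True},
    {"missed_assignment": True,  "time_pressure": False},
    {"missed_assignment": False, "time_pressure": True},
    {"missed_assignment": False, "time_pressure": False},
]
labels = ["reconsider", "reconsider", "reconsider", "proceed"]

tree = build_tree(rows, labels, ["missed_assignment", "time_pressure"])
print(predict(tree, {"missed_assignment": False, "time_pressure": False}))
# → proceed
```

The design point the article attributes to the researchers survives even in this toy version: every input to the forecast is an observable decision or condition from the scenario log, so no measurement of emotion is required.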