Can highly emotionally intelligent AI make people smarter?

Economics faces a simple but unavoidable question: what should we do when people behave irrationally? That seemingly simple question was the main driver of the behavioral revolution in economics.

With the rapid advance of technology, the same problem now plagues the tech industry. The online world was once imagined as a place where all kinds of information would be readily available and cooperation would come easily. In practice, lies and hatred spread faster online than truth and goodness. Corporate systems can trigger irrational behavior too: when forecasting sales, for example, employees often hide bad deals and selectively report good ones.

AI now stands at a behavioral crossroads: it has the potential to make people's behavior worse, but also to make it better. The key to getting good results from AI is to raise its emotional quotient, or EQ. So how do you improve an AI's emotional intelligence? One way is to mimic how emotionally intelligent humans behave when building relationships.

Whether we recognize it or not, we now have relationships with multiple applications. And apps, like people, can draw out both our positive and our negative behaviors.

When highly emotionally intelligent people interact with us, they learn our patterns, understand our motivations, and carefully weigh how to respond. Should they ignore us? Challenge us? Encourage us? It all depends on how they expect us to react.

Artificial intelligence can be trained to do the same thing. Why? Because these behaviors are more predictable than we think. The $70 billion weight-loss industry thrives because weight-loss companies know that most customers will regain the weight. The $40 billion gambling industry knows that gamblers will come back, clinging to the delusion that next time will be different, and it profits from that illogic. Banks offer credit cards because they know how hard it is for people to change their spending habits.

While it's still early days, behavioral science and machine learning have already produced some promising techniques for creating highly emotionally intelligent AI, and many companies are working on products that use them. These techniques include:

1. Notice breaks in habitual patterns and issue reminders.

In life, people who know you well can easily tell when your usual patterns are broken, and they react accordingly. A friend who notices a sudden change in your daily habits will ask why you seem different. Bank of America's online bill-pay system uses a similar idea to prevent input errors. The system remembers how you have paid in the past; if one day you significantly increase a payment to a vendor, it sends a warning.
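The pattern-break idea above can be sketched as a simple statistical check. This is a minimal illustration, not Bank of America's actual system: the data, function name, and z-score threshold are all hypothetical.

```python
# Minimal sketch of habit-break detection: flag a payment that deviates
# sharply from a user's past payments to the same vendor.
from statistics import mean, stdev

def flag_unusual_payment(history, new_amount, z_threshold=3.0):
    """Return True when a new payment breaks the user's habitual pattern."""
    if len(history) < 2:
        return False  # not enough history to establish a pattern
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu  # any deviation from a constant habit
    z = abs(new_amount - mu) / sigma
    return z > z_threshold

# A user who usually pays a vendor about $120 suddenly enters $1,200.
past = [118.0, 122.0, 119.5, 121.0, 120.0]
print(flag_unusual_payment(past, 1200.0))  # True -> send a warning
print(flag_unusual_payment(past, 123.0))   # False -> within the pattern
```

A production system would use richer features (payee, timing, category), but the core move is the same: model the habit, then react to the break.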

2. Use benchmarks to encourage employees to self-reflect.

Bluntly telling people they are performing poorly often adds fuel to the fire; it is ineffective and can backfire by provoking resistance. A gentler approach is to let them see for themselves how they compare to others. For example, a large tech company uses AI to make sales forecasts that are more accurate than the sales team's. To recalibrate team members, the system gives each person a personalized visual showing the gap between their own forecast and the AI's. Seeing the gap invites employees to explain it rather than dismiss the comparison or claim the AI's data is wrong. The AI then weighs the substance and timing of each person's response against the gap between the two forecasts, and chooses an appropriate follow-up nudge.
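The benchmark-then-nudge logic described above can be sketched in a few lines. Everything here is illustrative: the function name, the tolerance, and the tiered messages are assumptions, not the company's actual rules.

```python
# Hypothetical sketch: compare a rep's forecast to the AI's and pick a
# nudge proportional to the gap, instead of a blunt critique.
def forecast_gap_nudge(rep_forecast, ai_forecast, tolerance=0.10):
    """Return a self-reflection prompt scaled to the forecast gap."""
    gap = (rep_forecast - ai_forecast) / ai_forecast
    if abs(gap) <= tolerance:
        return "aligned: no action"
    direction = "above" if gap > 0 else "below"
    if abs(gap) <= 0.25:
        # moderate gap: just show the comparison and let them reflect
        return f"show comparison: you are {gap:+.0%} {direction} the model"
    # large gap: invite an explanation rather than assert the rep is wrong
    return f"ask for explanation: you are {gap:+.0%} {direction} the model"

print(forecast_gap_nudge(130_000, 100_000))
print(forecast_gap_nudge(102_000, 100_000))
```

The design choice worth noting is that the system never says "you are wrong"; each tier only surfaces the comparison, leaving the conclusion to the employee.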

3. Use game theory to decide whether to accept or challenge conclusions.

Imagine you are on a team that must identify errors among more than 100,000 fund trades every day. One fund with trillions of dollars in assets under management is using artificial intelligence to tackle this daunting problem. The initial version of the AI ranks potential errors (known as "anomalies") by risk and potential cost, surfacing the most dangerous anomalies first.

The system then tracked how much time analysts spent on each exception. The assumption was that analysts would spend more time on risky anomalies and less on the "easy stuff." In reality, however, some analysts skimmed the riskiest anomalies and drew questionably quick conclusions.

False-alarm rates tend to be very high in very large screening systems. For example, an undercover team at the U.S. Department of Homeland Security found that Transportation Security Administration screeners failed 95% of the time to stop smuggled weapons or explosives. The fund's analysts scrutinize countless transactions and, like TSA screeners processing thousands of passengers, keep their eyes open yet routinely overlook anomalies.

To combat this dangerous behavior, the fund adapted an algorithm used in chess programs. This modified game-theoretic system first observes whether the analyst treats an anomaly as a false positive and whether he decides to spend more time on it. Then, playing the role of the chess opponent, the AI decides its countermove: accept the analyst's decision, or challenge it.
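The accept-or-challenge move can be sketched as a simple policy over the analyst's observed behavior. This is a stand-in for the fund's actual game-theoretic system; the risk scale, the "rushed" cutoff, and the function name are all invented for illustration.

```python
# Hedged sketch of the opponent's countermove: challenge quick dismissals
# of risky anomalies, accept decisions that got proportionate attention.
def accept_or_challenge(risk_score, seconds_spent, expected_seconds):
    """Decide the AI's countermove given how the analyst reviewed an anomaly.

    risk_score:       0.0 (benign) to 1.0 (most dangerous)
    seconds_spent:    time the analyst actually spent on this anomaly
    expected_seconds: time the model expects a careful review to take
    """
    rushed = seconds_spent < 0.5 * expected_seconds
    if risk_score >= 0.8 and rushed:
        return "challenge"  # riskiest items reviewed too quickly
    return "accept"

print(accept_or_challenge(0.9, 20, 120))   # high risk, skimmed -> challenge
print(accept_or_challenge(0.9, 130, 120))  # high risk, careful -> accept
```

Like a chess engine, a real system would also learn from how the analyst responds to being challenged, adjusting when and how often it pushes back.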

4. Choose the right time for insight and action.

By any standard, Jeff Bezos is a master decision maker. In a recent interview with Bloomberg TV's David Rubenstein, Bezos described his decision-making framework. Asked about a complex decision late in the afternoon, he will usually respond: "That doesn't sound like a 4 p.m. decision; it sounds like a 9 a.m. decision." Timing matters.

My company's sales teams, Team A and Team B, tested the most effective time of day to respond to prospects' emails and found a big difference in response rates between emails sent on Tuesday mornings and those sent on Friday afternoons.

Many consumer information systems are tuned to maximize revenue. Optimization algorithms could instead improve the kinds of decisions consumers face and nudge them toward better choices. For example, a decision that requires reflection can be presented when the decision maker actually has time to think it over, so that it is either adopted or scheduled into a plan rather than rushed.
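The timing idea above, in the spirit of the "9 a.m. decision" anecdote, can be sketched as a small routing function. The complexity score, the noon cutoff, and the 9 a.m. target are invented for illustration.

```python
# Sketch of time-aware decision routing: handle simple decisions now,
# defer complex ones to the next morning when the decision maker is fresh.
from datetime import datetime, timedelta

def schedule_decision(complexity, now):
    """Return when a decision should be taken up.

    complexity: 0.0 (trivial) to 1.0 (hardest); cutoffs are hypothetical.
    """
    if complexity < 0.5 or now.hour < 12:
        return now  # simple decision, or the morning is still fresh
    # complex question asked in the afternoon: make it a 9 a.m. decision
    return (now + timedelta(days=1)).replace(hour=9, minute=0)

asked = datetime(2019, 7, 1, 16, 0)   # a complex question at 4 p.m.
print(schedule_decision(0.9, asked))  # deferred to 9 a.m. the next day
print(schedule_decision(0.2, asked))  # simple enough to decide now
```

A consumer-facing system would apply the same routing to notifications: surface reflective choices in a calm slot, not whenever the algorithm finds it profitable.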

Can highly emotionally intelligent AI bring more civility to the internet? Social media companies would do well to consider a distinction that Western businesspeople learn when negotiating with their Japanese counterparts: "honne" (inner feelings) and "tatemae" (publicly expressed feelings). Understanding the difference between what a person feels and what he is willing to say can prevent a lot of miscalculation.

Algorithms built on this distinction could address a predictable tendency: under the influence of others (even virtual crowds), people hesitate to say and do what they actually want. Someone about to write an inflammatory, misleading, or vulgar post might be urged by the AI to rephrase, or shown what keyboard warriors look like in trending topics. Developing such emotionally intelligent AI is a daunting challenge, but rather than simply deleting individual posts, it gets at the root of the problem.
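The rephrase-before-posting nudge can be sketched as a pre-post check. This toy version uses a hard-coded word list as a stand-in for a real toxicity model; the word list, function name, and message are all hypothetical.

```python
# Minimal sketch of a pre-post civility nudge: instead of deleting posts
# after the fact, prompt the author to reconsider before publishing.
INFLAMMATORY = {"idiot", "liar", "trash"}  # stand-in for a trained classifier

def prepost_nudge(text):
    """Return a rephrase prompt for inflammatory drafts, else None."""
    hits = [w.strip(".,!?") for w in text.lower().split()
            if w.strip(".,!?") in INFLAMMATORY]
    if hits:
        return f"Before you post: consider rephrasing ({', '.join(hits)})"
    return None  # publish without interruption

print(prepost_nudge("You are a liar and your take is trash!"))
print(prepost_nudge("Thanks for the thoughtful article"))
```

The point of the design is to respect the honne/tatemae gap: the system intervenes privately, before the public-facing statement exists, rather than punishing it afterward.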

Posted by Anvon. Please cite the source when reprinting or quoting this article: https://anvon.com/en/44.html
