Instagram is developing a new feature called "AI Friend" that will let users personalize their interactions with an artificial intelligence chatbot. App researcher Alessandro Paluzzi has leaked screenshots of the feature, giving us an early glimpse of what it may look like.
Instagram's "AI Friend" is designed to make conversations with AI more straightforward, with the assistant answering questions, helping solve problems, and offering advice, among other tasks.
A Friend That Can Be Customized
According to a report by TechCrunch, the AI-powered feature is built around three main elements:
Personalization Options: Users can select their AI companion's gender, age, race, and personality, and adjust its appearance. Personality options include "introverted," "excited," "imaginative," "humorous," "realistic," and "uplifting."
Conversations and Interests: Users can further shape their AI friend by selecting its interests, which influence its demeanor and the way it speaks. Available interests include "DIY," "animals," "career," "education," "entertainment," "music," and "nature," among others.
Name and Appearance: Users can give their Instagram AI Friend a name and an avatar to further personalize the experience.
After finishing these customizations, users are taken to a chat window where they can begin talking with their AI companion.
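To make the leaked options more concrete, here is a purely illustrative sketch, in TypeScript, of how such a customization profile could be represented. Instagram has not published any API or data model for AI Friend, so every type and field name below is a hypothetical guess based only on the options described above.

```typescript
// Hypothetical sketch only: Instagram has not documented AI Friend's data model.
// Types and field names are illustrative guesses based on the leaked screenshots.

type Personality =
  | "introverted"
  | "excited"
  | "imaginative"
  | "humorous"
  | "realistic"
  | "uplifting";

type Interest =
  | "DIY"
  | "animals"
  | "career"
  | "education"
  | "entertainment"
  | "music"
  | "nature";

interface AIFriendProfile {
  name: string;           // user-chosen name for the companion
  avatarUrl?: string;      // optional user-selected avatar image
  gender: string;
  age: number;
  personality: Personality;
  interests: Interest[];   // shapes the companion's tone and conversation topics
}

// Example profile a user might assemble in the customization flow.
const friend: AIFriendProfile = {
  name: "Nova",
  gender: "female",
  age: 25,
  personality: "uplifting",
  interests: ["music", "nature"],
};

console.log(
  `${friend.name} is ${friend.personality} and likes ${friend.interests.join(", ")}.`
);
```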
Instagram has not yet announced when, or whether, the feature will be available. As always, unreleased features can change significantly during development.
Instagram's AI Friend Also Poses Serious Risks
The chatbot-as-friend concept has plenty of potential, but it also carries real risks.
Julia Stoyanovich, director of the Center for Responsible AI at New York University, warns that users can mistake conversations with an AI for genuine human communication. That underscores the need for transparency in human-AI interactions, since opaque exchanges leave users vulnerable to manipulation.
We are led to believe that whatever is on the other end of the line understands us and empathizes with us, and by opening up to it we expose ourselves to being exploited or let down. Stoyanovich told TechCrunch that this is one of the clear risks of anthropomorphizing artificial intelligence.
According to The Japan Times, a 2023 study published in JAMA Internal Medicine found that chatbots can be remarkably empathetic and responsive: the AI chatbot's answers were rated significantly higher for quality and empathy than those written by actual physicians. Still, while AI can help answer questions and provide assistance, its purpose is not to replace humans.
A Reuters report published in September said Meta Platforms used public Facebook and Instagram posts as training data for its Meta AI virtual assistant.
The issue came up at Meta's annual Connect conference, where artificial intelligence was the main focus. Industry experts, however, have raised a number of security and privacy concerns about AI chatbot applications, particularly given the risk of cyberattacks and the handling of sensitive personal data.
The development comes amid growing controversy around AI chatbots. Several incidents this year in which chatbots were found to encourage destructive behavior have highlighted concerns about the technology's ethical implications.
Even as AI chatbots offer real benefits, user safety and transparency of interactions should remain the top priorities.