If not for last week’s Silicon Valley Bank (SVB) collapse, nearly every tech conversation would be centered on AI and chatbots. Over the past few days, Microsoft-backed OpenAI released a new language model called GPT-4, its competitor Anthropic released the chatbot Claude, Google said it was integrating AI into Workspace tools like Gmail and Docs, and Microsoft Bing has drawn attention to itself with chatbot-enabled search. The only name missing from the action? Apple.
Last month, the Cupertino-based company held an internal event focused on AI and large language models. According to a New York Times report, many teams, including people working on Siri, routinely test “language-generating concepts.”
People (myself included) have long complained about Siri failing to understand queries. Siri, like other assistants such as Alexa and Google Assistant, has struggled with the different accents and phonetics of people living in different parts of the world, even when they speak the same language.
The newfound popularity of ChatGPT and text-based search has made it easier for users to interact with different AI models. But currently, the only way to chat with Apple’s AI assistant, Siri, by text is to enable a feature buried in Accessibility settings.
In an interview with the NYT, former Apple engineer John Burke, who worked on Siri, said that Apple’s assistant had evolved slowly because of “clunky code,” which made it difficult to update even basic functionality. He also mentioned that Siri relies on a large database containing a huge number of words, so when engineers needed to add features or phrases, the database had to be rebuilt – a process that reportedly took up to six weeks.
The NYT report did not say whether Apple is building its own language models or plans to adopt an existing one. But just like Google and Microsoft, the company led by Tim Cook is unlikely to limit itself to a Siri-powered chatbot. Since Apple has long prided itself on being an ally of artists and creators, it will likely want to apply advances in language models to those areas as well.
The company has been shipping AI-powered features for a while now, even if they weren’t apparent at first. These include better keyboard suggestions, computational photography, Face ID unlocking while wearing a mask, lifting subjects from the background across the system, handwashing and crash detection on Apple Watch, and, most recently, the karaoke feature on Apple Music. But none of them is as front and center as a chatbot.
Apple has generally been tight-lipped about its AI efforts. But in January, the company launched a program offering authors AI-powered storytelling services to turn their books into audiobooks. This indicated that the iPhone maker was already thinking about use cases for generative AI. I wouldn’t be surprised if we heard more about the company’s efforts in these areas at the Worldwide Developers Conference (WWDC) in a few months.