{ "version": "https://jsonfeed.org/version/1.1", "user_comment": "This feed allows you to read the posts from this site in any feed reader that supports the JSON Feed format. To add this feed to your reader, copy the following URL -- https://www.pymnts.com/category/artificial-intelligence-2/feed/json/ -- and add it to your reader.", "next_url": "https://www.pymnts.com/category/artificial-intelligence-2/feed/json/?paged=2", "home_page_url": "https://www.pymnts.com/category/artificial-intelligence-2/", "feed_url": "https://www.pymnts.com/category/artificial-intelligence-2/feed/json/", "language": "en-US", "title": "artificial intelligence Archives | PYMNTS.com", "description": "What's next in payments and commerce", "icon": "https://www.pymnts.com/wp-content/uploads/2022/11/cropped-PYMNTS-Icon-512x512-1.png", "items": [ { "id": "https://www.pymnts.com/?p=2683559", "url": "https://www.pymnts.com/artificial-intelligence-2/2025/sam-altman-openai-has-reached-roughly-800-million-users/", "title": "Sam Altman: OpenAI Has Reached Roughly 800 Million Users", "content_html": "
OpenAI’s CEO says the generative artificial intelligence (AI) startup has reached approximately 800 million people.
\n\u201cSomething like 10% of the world uses our systems, now a lot,\u201d said Sam Altman, whose comments at a Friday (April 11) TED 2025 event were reported by Seeking Alpha.\u00a0
\nHost Chris Anderson pointed out that Altman had said his company\u2019s user base was growing rapidly, doubling in \u201cjust a few weeks.\u201d
\nThe report noted that OpenAI\u2019s growth has been helped along by viral features like the ability to generate images and videos in a range of styles, such as that of legendary Japanese animation studio, Studio Ghibli.
\nLast month, Altman said the company, maker of ChatGPT, had added a million users in one hour. Asked during the TED event if the company had considered compensating artists for creating works in their style, Altman said there could be prompts that could trigger payments for specific artists.
\n\u201cI think it would be cool to figure out a new model where if you say, \u2018I want to do it in the name of this artist,\u2019 and they opt in, there\u2019s a revenue model there,\u201d Altman said.
\nAltman added the company had guidelines to prevent the AI model from generating images\u00a0in the styles of specific artists or creators. He also discussed the company\u2019s work on AI agents, models that can operate autonomously on behalf of users.
\nIn other AI news, PYMNTS wrote last week about ways the technology can help companies hoping to alleviate the cost of new tariffs. While those levies will eat into the bottom line of many businesses, AI can help reduce costs while ensuring productivity stays up.
\nResearch by PYMNTS Intelligence has shown that 82% of workers who use generative AI at least weekly say it increases productivity, even though half of these workers also worry that AI would replace them at their jobs.
\n\u201cAI can also facilitate material selection by assessing availability, compliance and cost implications, which helps brands find substitute materials when needed without compromising on quality or compliance with regulatory standards,\u201d said Tarun Chandrasekhar, president and CPO at Syndigo.
\nStill, Pierre Lapr\u00e9e, chief product officer of SpendHQ, told PYMNTS that while AI has a part to play, it\u2019s \u201cmisguided\u201d to believe that AI will automatically offset rising costs from shifts in trade policy.
\n\u201cTariffs are complex, and so is procurement,\u201d he said. \u201cYou need more than an algorithm \u2014 you need clean, structured, specific data. Without that, AI won\u2019t reduce risk. It will amplify it.\u201d
\n\n
The post Sam Altman: OpenAI Has Reached Roughly 800 Million Users appeared first on PYMNTS.com.
\n", "content_text": "OpenAI’s CEO says the generative artificial intelligence (AI) startup has reached approximately 800 million people.\n\u201cSomething like 10% of the world uses our systems, now a lot,\u201d said Sam Altman, whose comments at a Friday (April 11) TED 2025 event were reported by Seeking Alpha.\u00a0\nHost Chris Anderson pointed out that Altman had said his company\u2019s user base was growing rapidly, doubling in \u201cjust a few weeks.\u201d\nThe report noted that OpenAI\u2019s growth has been helped along by viral features like the ability to generate images and videos in a range of styles, such as that of legendary Japanese animation studio, Studio Ghibli.\nLast month, Altman said the company, maker of ChatGPT, had added a million users in one hour. Asked during the TED event if the company had considered compensating artists for creating works in their style, Altman said there could be prompts that could trigger payments for specific artists.\n\u201cI think it would be cool to figure out a new model where if you say, \u2018I want to do it in the name of this artist,\u2019 and they opt in, there\u2019s a revenue model there,\u201d Altman said.\nAltman added the company had guidelines to prevent the AI model from generating images\u00a0in the styles of specific artists or creators. He also discussed the company\u2019s work on AI agents, models that can operate autonomously on behalf of users.\nIn other AI news, PYMNTS wrote last week about ways the technology can help companies hoping to alleviate the cost of new tariffs. 
While those levies will eat into the bottom line of many businesses, AI can help reduce costs while ensuring productivity stays up.\nResearch by PYMNTS Intelligence has shown that 82% of workers who use generative AI at least weekly say it increases productivity, even though half of these workers also worry that AI would replace them at their jobs.\n\u201cAI can also facilitate material selection by assessing availability, compliance and cost implications, which helps brands find substitute materials when needed without compromising on quality or compliance with regulatory standards,\u201d said Tarun Chandrasekhar, president and CPO at Syndigo.\nStill, Pierre Lapr\u00e9e, chief product officer of SpendHQ, told PYMNTS that while AI has a part to play, it\u2019s \u201cmisguided\u201d to believe that AI will automatically offset rising costs from shifts in trade policy.\n\u201cTariffs are complex, and so is procurement,\u201d he said. \u201cYou need more than an algorithm \u2014 you need clean, structured, specific data. Without that, AI won\u2019t reduce risk. 
It will amplify it.\u201d\n \nThe post Sam Altman: OpenAI Has Reached Roughly 800 Million Users appeared first on PYMNTS.com.", "date_published": "2025-04-13T20:13:31-04:00", "date_modified": "2025-04-13T20:38:28-04:00", "authors": [ { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/f05cc0fdcc9e387e4f3570c17158c503?s=512&d=blank&r=g" } ], "author": { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/f05cc0fdcc9e387e4f3570c17158c503?s=512&d=blank&r=g" }, "image": "https://www.pymnts.com/wp-content/uploads/2025/04/OpenAI-Altman.jpg", "tags": [ "AI", "artificial intelligence", "ChatGPT", "GenAI", "generative AI", "News", "OpenAI", "PYMNTS News", "Sam Altman", "What's Hot", "artificial intelligence" ] }, { "id": "https://www.pymnts.com/?p=2683482", "url": "https://www.pymnts.com/artificial-intelligence-2/2025/safe-superintelligences-value-jumps-sixfold-after-6-billion-funding-round/", "title": "OpenAI Co-Founder\u2019s Firm Value Jumps Sixfold After $6 Billion Funding Round", "content_html": "Artificial intelligence (AI) startup Safe Superintelligence (SSI) has reportedly raised $6 billion in new funding.
\nThe round, as reported Friday (April 11) by the Financial Times (FT), values the company at $32 billion, a more than sixfold increase from the last time the firm raised money.
\nIt\u2019s the latest sign of continued investor enthusiasm for AI companies, even if \u2014 as the FT report noted\u00a0\u2014 the company in question does not yet have a product.
\nThe company has not provided much info on how it plans to overtake the likes of OpenAI and Anthropic, but co-founder Ilya Sutskever told the FT last year that he and his team had \u201cidentified a new mountain to climb that\u2019s a bit different from what I was working on previously.\u201d
\nSources told the news outlet that SSI has been closed-mouthed even with its backers, though three sources close to the company said SSI was working on \u201cunique ways\u201d of building and scaling AI models.
\nA separate report from Reuters \u2014 also citing unnamed sources \u2014 said that Google and Nvidia were among the investors in this round. PYMNTS has contacted SSI for comment but has not yet gotten a reply.
\nSSI was launched last year by Sutskever \u2014 former chief scientist at OpenAI \u2014 Apple AI vet Daniel Gross and AI researcher Daniel Levy.
\nIn the fall of 2023, Sutskever was involved in an unsuccessful attempt to oust OpenAI CEO Sam Altman from his position. Sutskever announced he would leave the company in May, apparently on good terms with Altman.
\nStill, SSI has taken a different approach than OpenAI, PYMNTS wrote last year, focusing solely on developing safe superintelligence without pressure from commercial interests.
\n\u201cThis has reignited the debate over the possibility of achieving such a feat, with some experts questioning the feasibility of creating a superintelligent AI, given the current limitations of AI systems and the challenges in ensuring its safety,\u201d that report said.
\n\u201cCritics of the superintelligence goal point to the current limitations of AI systems, which, despite their impressive capabilities, still struggle with tasks that require common sense reasoning and contextual understanding.\u201d
\nThese critics contend that the jump from narrow AI, which excels at specific tasks, to a general intelligence that exceeds human capabilities requires more than just increasing computational power or data.
\nSkeptics also argue that the challenges involved in creating a safe superintelligence could be insurmountable, given the limits of humanity\u2019s understanding of AI and of the technology itself.
\nThe post OpenAI Co-Founder\u2019s Firm Value Jumps Sixfold After $6 Billion Funding Round appeared first on PYMNTS.com.
\n", "content_text": "Artificial intelligence (AI) startup Safe Superintelligence (SSI) has reportedly raised $6 billion in new funding.\nThe round, as reported Friday (April 11) by the Financial Times (FT), values the company at $32 billion, a more than sixfold increase from the last time the firm raised money.\nIt\u2019s the latest sign of continued investor enthusiasm for AI companies, even if \u2014 as the FT report noted\u00a0\u2014 the company in question does not yet have a product.\nThe company has not provided much info on how it plans to overtake the likes of OpenAI and Anthropic, but co-founder Ilya Sutskever told the FT last year that he and his team had \u201cidentified a new mountain to climb that\u2019s a bit different from what I was working on previously.\u201d\nSources told the news outlet that SSI has been closed-mouthed even with its backers, though three sources close to the company said SSI was working on \u201cunique ways\u201d of building and scaling AI models.\nA separate report from Reuters \u2014 also citing unnamed sources \u2014 said that Google and Nvidia were among the investors in this round. PYMNTS has contacted SSI for comment but has not yet gotten a reply.\nSSI was launched last year by Sutskever \u2014 former chief scientist at OpenAI \u2014 Apple AI vet Daniel Gross and AI researcher Daniel Levy.\nIn the fall of 2023, Sutskever was involved in an unsuccessful attempt to oust OpenAI CEO Sam Altman from his position. 
Sutskever announced he would leave the company in May, apparently on good terms with Altman.\nStill, SSI has taken a different approach than OpenAI, PYMNTS wrote last year, focusing solely on developing safe superintelligence without pressure from commercial interests.\n\u201cThis has reignited the debate over the possibility of achieving such a feat, with some experts questioning the feasibility of creating a superintelligent AI, given the current limitations of AI systems and the challenges in ensuring its safety,\u201d that report said.\n\u201cCritics of the superintelligence goal point to the current limitations of AI systems, which, despite their impressive capabilities, still struggle with tasks that require common sense reasoning and contextual understanding.\u201d\nThese critics contend that the jump from narrow AI, which excels at specific tasks, to a general intelligence that exceeds human capabilities requires more than just increasing computational power or data.\nSkeptics also argue that the challenges involved in creating a safe superintelligence could be insurmountable, given the limits of humanity\u2019s understanding of AI and of the technology itself.\nThe post OpenAI Co-Founder\u2019s Firm Value Jumps Sixfold After $6 Billion Funding Round appeared first on PYMNTS.com.", "date_published": "2025-04-13T17:27:26-04:00", "date_modified": "2025-04-13T21:53:39-04:00", "authors": [ { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/f05cc0fdcc9e387e4f3570c17158c503?s=512&d=blank&r=g" } ], "author": { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/f05cc0fdcc9e387e4f3570c17158c503?s=512&d=blank&r=g" }, "image": "https://www.pymnts.com/wp-content/uploads/2025/04/Safe-Superintelligence.jpg", "tags": [ "AI", "AI funding", "AI Investment", "artificial intelligence", "Ilya Sutskever", "News", "OpenAI", "PYMNTS News", "Safe 
Superintelligence", "SSI", "What's Hot", "artificial intelligence" ] }, { "id": "https://www.pymnts.com/?p=2588175", "url": "https://www.pymnts.com/artificial-intelligence-2/2025/salesforce-svp-enterprise-ai-driving-growth-in-data-cloud-business/", "title": "Salesforce Bets Big on Enterprise AI to Drive Growth", "content_html": "Customer relationship management (CRM) giant Salesforce is seeing \u201cmassive\u201d growth in its data cloud platform, fueled by interest in generative and agentic artificial intelligence (AI) from enterprises, according to one of the company\u2019s senior executives.
\nIn an exclusive interview with PYMNTS, Gabrielle Tao, senior vice president of product management at Salesforce, said the company is seeing robust demand for its data cloud services as enterprises realize they have to better organize their data to fully tap the power of AI.
\n\u201cWe make enterprise data ready for the agentic era,\u201d Tao said, adding that the platform has seen strong growth in recent years. \u201cWe\u2019re very excited to keep going and make agentic experiences world class.\u201d
\nIn the fiscal year ended Jan. 31, Salesforce Data Cloud booked $900 million in revenue, up 120% year over year. The company said the platform is used by nearly half of Fortune 100 companies. In Q4 alone, all of its top 10 deals included both AI and data cloud components.
\nWhy all this interest in data alongside AI?
\nAI is fueled by data. Without data, AI is useless. AI is trained on data. It analyzes, makes predictions and learns new capabilities using data. It ingests text, images, videos, audio, code, sensor readings, math and other types of data \u2014\u00a0whether they reside in PDFs, emails, social media posts, spreadsheets, CRM systems, HR databases or many other media.
\nBut in most companies, data is not organized well. It is scattered across many places, with batches closed off from each other. Before data is ready for AI, it must be cleaned, unified, well-structured, governed and \u2014 ideally \u2014 real-time and searchable. Only then can AI be accurate, relevant and safe.
\n\u201cWhenever companies say they have \u2018unified\u2019 data, a lot of times what that means is they\u2019ve centralized the storage,\u201d Tao said. The reality is that in many cases, \u201cthey\u2019re not able to unlock the value of the data in real time for business applications and business agents.\u201d
\nAccording to a 2024 Harvard Business Review article, senior managers often complain \u201cthey don\u2019t have data they really need and don\u2019t trust what they have. \u2026 The hype around AI exacerbates those concerns.\u201d
\nSalesforce has been tackling this unglamorous but critical problem. Tao joined the company in 2019 to solve it by creating Data Cloud.
\nAccording to Gartner Peer Insights, the platform received four out of five stars from 119 companies that have used it. However, in 2024, one 2-star review cited cost as a concern and a handful of 3-star reviews cited a lack of good documentation, limited features and integration issues.
\nSalesforce\u2019s Data Cloud competitors include Adobe Real-Time CDP and Oracle Unity Customer Data Platform, among others.
\nRead more: Salesforce to Launch AI Agents and Cloud-Based POS for Retailers
\nTo be sure, problems with organizational data have been around for decades. Many vendors, such as Snowflake, Databricks and the cloud computing giants, offer to organize an enterprise\u2019s data, unifying it into data lakes or data warehouses.
\nSince the problem is not new, many companies are actually in some phase of data organization, Tao said. But unifying data remains a big lift for many.
\n\u201cProbably 97% of the customers I talk to today, they have some form of data lake, data warehouse. Universally, that is pretty much true,\u201d Tao said. \u201cAt the same time, the data is quite trapped in those places.\u201d
\nThat\u2019s because they were built for analysts and data scientists, she said. For business users, \u201cthere is very limited ability to tap into the vastness of all that data.\u201d
\n\u201cHow does that translate to something that makes sense for \u2026 the customer-facing agent performing the sales function?\u201d Tao asked. She said Salesforce set out to make it accessible to business users.
\nOriginally conceived as a customer data platform (CDP) in 2019 focused on marketing use cases, Salesforce Data Cloud has transformed into what Tao calls a \u201cuniversal data activation layer\u201d for all Salesforce applications, spanning sales, service, commerce, analytics and AI.
\nUnlike traditional data platforms that duplicate or copy data through complex extract-transform-load (ETL) processes, Data Cloud uses a \u201czero copy\u201d architecture \u2014 something the company pioneered, according to Tao.
\nZero copy pulls in metadata from disparate data sources without physically moving the data, allowing companies to harmonize, govern and activate their data in place. That means enterprises don\u2019t need to unravel what they\u2019ve built, Tao explained.
\nTao said the result is a real-time, unified view of the customer that both human and AI agent workers can access without having to replicate or recode business logic across hundreds of systems. Since the AI uses the company\u2019s own data, it mitigates hallucinations as well, Tao said.
\nSalesforce also offers a governance layer that lets companies do things like set access permissions for both human and AI agent workers. For example, companies can specify what data an entry-level sales representative can see and which objects a chatbot can access.
\nThis ability will be key when using multiple layers of AI agents, she said.
\nThe post Salesforce Bets Big on Enterprise AI to Drive Growth appeared first on PYMNTS.com.
\n", "content_text": "Customer relationship management (CRM) giant Salesforce is seeing \u201cmassive\u201d growth in its data cloud platform, fueled by interest in generative and agentic artificial intelligence (AI) from enterprises, according to one of the company\u2019s senior executives.\nIn an exclusive interview with PYMNTS, Gabrielle Tao, senior vice president of product management at Salesforce, said the company is seeing robust demand for its data cloud services as enterprises realize they have to better organize their data to fully tap the power of AI.\n\u201cWe make enterprise data ready for the agentic era,\u201d Tao said, adding that the platform has seen strong growth in recent years. \u201cWe\u2019re very excited to keep going and make agentic experiences world class.\u201d\nIn the fiscal year ended Jan. 31, Salesforce Data Cloud booked $900 million in revenue, up 120% year over year. The company said the platform is used by nearly half of Fortune 100 companies. In Q4 alone, all of its top 10 deals included both AI and data cloud components.\nWhy all this interest in data alongside AI? \nAI is fueled by data. Without data, AI is useless. AI is trained on data. It analyzes, makes predictions and learns new capabilities using data. It ingests text, images, videos, audio, code, sensor readings, math and other types of data \u2014\u00a0whether they reside in PDFs, emails, social media posts, spreadsheets, CRM systems, HR databases or many other media.\nBut in most companies, data is not organized well. It is scattered across many places, with batches closed off from each other. Before data is ready for AI, it must be cleaned, unified, well-structured, governed and \u2014 ideally \u2014 real-time and searchable. Only then can AI be accurate, relevant and safe.\n\u201cWhenever companies say they have \u2018unified\u2019 data, a lot of times what that means is they\u2019ve centralized the storage,\u201d Tao said. 
The reality is that in many cases, \u201cthey\u2019re not able to unlock the value of the data in real time for business applications and business agents.\u201d\nAccording to a 2024 Harvard Business Review article, senior managers often complain \u201cthey don\u2019t have data they really need and don\u2019t trust what they have. \u2026 The hype around AI exacerbates those concerns.\u201d\nSalesforce has been tackling this unglamorous but critical problem. Tao joined the company in 2019 to solve it by creating Data Cloud. \nAccording to Gartner Peer Insights, the platform received four out of five stars from 119 companies that have used it. However, in 2024, one 2-star review cited cost as a concern and a handful of 3-star reviews cited a lack of good documentation, limited features and integration issues.\nSalesforce\u2019s Data Cloud competitors include Adobe Real-Time CDP and Oracle Unity Customer Data Platform, among others.\nRead more: Salesforce to Launch AI Agents and Cloud-Based POS for Retailers\nDown and Dirty With Data\nTo be sure, problems with organizational data have been around for decades. Many vendors, such as Snowflake, Databricks and the cloud computing giants, offer to organize an enterprise\u2019s data, unifying it into data lakes or data warehouses. \nSince the problem is not new, many companies are actually in some phase of data organization, Tao said. But unifying data remains a big lift for many.\n\u201cProbably 97% of the customers I talk to today, they have some form of data lake, data warehouse. Universally, that is pretty much true,\u201d Tao said. \u201cAt the same time, the data is quite trapped in those places.\u201d\nThat\u2019s because they were built for analysts and data scientists, she said. 
For business users, \u201cthere is very limited ability to tap into the vastness of all that data.\u201d \n\u201cHow does that translate to something that makes sense for \u2026 the customer-facing agent performing the sales function?\u201d Tao asked. She said Salesforce set out to make it accessible to business users.\nOriginally conceived as a customer data platform (CDP) in 2019 focused on marketing use cases, Salesforce Data Cloud has transformed into what Tao calls a \u201cuniversal data activation layer\u201d for all Salesforce applications, spanning sales, service, commerce, analytics and AI.\nUnlike traditional data platforms that duplicate or copy data through complex extract-transform-load (ETL) processes, data cloud uses a \u201czero copy\u201d architecture \u2014 something the company pioneered, according to Tao.\nZero copy pulls in metadata from disparate data sources without physically moving the data, allowing companies to harmonize, govern and activate their data in place. That means enterprises don\u2019t need to unravel what they\u2019ve built, Tao explained.\nTao said the result is a real-time, unified view of the customer that both human and AI agent workers can access without having to replicate or recode business logic across hundreds of systems. Since the AI uses the company\u2019s own data, it mitigates hallucinations as well, Tao said.\nSalesforce also offers a governance layer that lets companies do things like set access permissions for both human and AI agent workers. 
For example, companies can specify what data an entry-level sales representative can see and which objects a chatbot can access.\nThis ability will be key when using multiple layers of AI agents, she said.\nThe post Salesforce Bets Big on Enterprise AI to Drive Growth appeared first on PYMNTS.com.", "date_published": "2025-04-10T12:05:46-04:00", "date_modified": "2025-04-13T22:37:19-04:00", "authors": [ { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/f05cc0fdcc9e387e4f3570c17158c503?s=512&d=blank&r=g" } ], "author": { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/f05cc0fdcc9e387e4f3570c17158c503?s=512&d=blank&r=g" }, "image": "https://www.pymnts.com/wp-content/uploads/2025/04/Salesforce-AI-1.jpg", "tags": [ "AI", "artificial intelligence", "B2B", "B2B Payments", "commercial payments", "data analysis", "data organization", "data silos", "Gabrielle Tao", "GenAI", "generative AI", "News", "PYMNTS News", "Salesforce", "Salesforce Data Cloud", "What's Hot In B2B", "artificial intelligence" ] }, { "id": "https://www.pymnts.com/?p=2681036", "url": "https://www.pymnts.com/artificial-intelligence-2/2025/anthropic-debuts-200-per-month-subscription-to-claude-ai-model/", "title": "Anthropic Debuts $200-per-Month Subscription to Claude AI Model", "content_html": "Anthropic\u00a0has debuted a higher-priced subscription version of its Claude artificial intelligence (AI) chatbot.
\nThe startup\u2019s Max plan,\u00a0announced\u00a0Wednesday (April 9), is designed for users who work extensively with Claude and require expanded access for their most important tasks. This plan also gives users priority access to Anthropic\u2019s newest features and models.
\n\u201cThe top request from our most active users has been expanded Claude access,\u201d the announcement said. \u201cThe new Max plan delivers exactly that. With up to 20x higher usage limits, you can maintain momentum on your most demanding projects with little disruption.\u201d
\nAccording to Anthropic, the Max plan lets users choose from two levels: \u201cExpanded Usage,\u201d a plan priced at $100 per month and designed for frequent users who work with Claude on a variety of tasks; and Maximum Flexibility, costing $200 per month, for daily users who collaborate often with Claude for most tasks.
\n\u201cMore usage unlocks more possibilities to collaborate with Claude, whether for work or life,\u201d the company said.
\n\u201cAt work, it means you can build Projects around any work task\u2014whether it be writing, software, or data analysis\u2014and really collaborate with Claude until your outcomes are just right.\u201d
\nAnthropic rival artificial intelligence startup OpenAI debuted a\u00a0$200-per-month \u201cresearch grade\u201d\u00a0version of its ChatGPT tool in December.
\nLast week, Anthropic rolled out\u00a0Claude for Education, which lets universities come up with and implement AI-powered approaches to teaching, learning and administration.
\nThe company said recently that it was more interested in developing generalist foundation AI models for enterprise users than in building hardware or consumer entertainment offerings.
\nSpeaking at the HumanX conference in March, Anthropic Chief Product Officer Mike Krieger said the company was focused on balancing research breakthroughs and product development.
\n\u201cWe want to help people get work done, whether it\u2019s code, whether it\u2019s knowledge work, etc.,\u201d he said. \u201cAnd then you can then\u00a0imagine different manifestations\u00a0of that\u201d in applications for the consumer, small business and up to large corporations and the C-suite.
\nAlso Wednesday, PYMNTS spoke with AI experts on Meta\u2019s\u00a0newest Llama AI model, which features a context window of up to 10 million tokens \u2014 or around 7.5 million words \u2014 almost 10 times that of Google\u2019s Gemini 2.5.
\nIlia Badeev, head of data science at\u00a0Trevolution Group, told PYMNTS that his team was still marveling at Gemini 2.5\u2019s 1 million-token context window when Llama 4 launched.
\n\u201cThis is an\u00a0enormous number. With 17 billion active parameters, we get a \u2018mini\u2019 level model (super-fast and super-cheap) but with an astonishingly large context. And as we know, context is king,\u201d Badeev said. \u201cWith enough context, Llama 4 Scout\u2019s performance on specific applied tasks could be significantly better than many state-of-the-art models.\u201d
\nFor all PYMNTS AI coverage, subscribe to the daily\u00a0AI\u00a0Newsletter.
\nThe post Anthropic Debuts $200-per-Month Subscription to Claude AI Model appeared first on PYMNTS.com.
\n", "content_text": "Anthropic\u00a0has debuted a higher-priced subscription version of its Claude artificial intelligence (AI) chatbot.\nThe startup\u2019s Max plan,\u00a0announced\u00a0Wednesday (April 9), is designed for users who work extensively with Claude and require expanded access for their most important tasks. This plan also gives users priority access to Anthropic\u2019s newest features and models.\n\u201cThe top request from our most active users has been expanded Claude access,\u201d the announcement said. \u201cThe new Max plan delivers exactly that. With up to 20x higher usage limits, you can maintain momentum on your most demanding projects with little disruption.\u201d\nAccording to Anthropic, the Max plan lets users choose from two levels: \u201cExpanded Usage,\u201d a plan priced at $100 per month and designed for frequent users who work with Claude on a variety of tasks; and Maximum Flexibility, costing $200 per month, for daily users who collaborate often with Claude for most tasks.\n\u201cMore usage unlocks more possibilities to collaborate with Claude, whether for work or life,\u201d the company said.\n\u201cAt work, it means you can build Projects around any work task\u2014whether it be writing, software, or data analysis\u2014and really collaborate with Claude until your outcomes are just right.\u201d\nAnthropic rival artificial intelligence startup OpenAI debuted a\u00a0$200-per-month \u201cresearch grade\u201d\u00a0version of its ChatGPT tool in December.\nLast week, Anthropic rolled out\u00a0Claude for Education, which lets universities come up with and implement AI-powered approaches to teaching, learning and administration.\nThe company said recently that it was more interested in developing generalist foundation AI models for enterprise users than in building hardware or consumer entertainment offerings.\nSpeaking at the HumanX conference in March, Anthropic Chief Product Officer Mike Krieger said the company was focused on 
balancing research breakthroughs and product development.\n\u201cWe want to help people get work done, whether it\u2019s code, whether it\u2019s knowledge work, etc.,\u201d he said. \u201cAnd then you can then\u00a0imagine different manifestations\u00a0of that\u201d in applications for the consumer, small business and up to large corporations and the C-suite.\nAlso Wednesday, PYMNTS spoke with AI experts on Meta\u2019s\u00a0newest Llama AI model, which features a context window of up to 10 million tokens \u2014 or around 7.5 million words \u2014 almost 10 times that of Google\u2019s Gemini 2.5.\nIlia Badeev, head of data science at\u00a0Trevolution Group, told PYMNTS that his team was still marveling at Gemini 2.5\u2019s 1 million-token context window when Llama 4 launched.\n\u201cThis is an\u00a0enormous number. With 17 billion active parameters, we get a \u2018mini\u2019 level model (super-fast and super-cheap) but with an astonishingly large context. And as we know, context is king,\u201d Badeev said. 
\u201cWith enough context, Llama 4 Scout\u2019s performance on specific applied tasks could be significantly better than many state-of-the-art models.\u201d\n\nFor all PYMNTS AI coverage, subscribe to the daily\u00a0AI\u00a0Newsletter.\n\nThe post Anthropic Debuts $200-per-Month Subscription to Claude AI Model appeared first on PYMNTS.com.", "date_published": "2025-04-09T14:41:01-04:00", "date_modified": "2025-04-09T14:41:01-04:00", "authors": [ { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/f05cc0fdcc9e387e4f3570c17158c503?s=512&d=blank&r=g" } ], "author": { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/f05cc0fdcc9e387e4f3570c17158c503?s=512&d=blank&r=g" }, "image": "https://www.pymnts.com/wp-content/uploads/2023/10/Anthropic-AI-Artificial-Intelligence.jpg", "tags": [ "AI", "AI models", "AI pricing", "Anthropic", "artificial intelligence", "chatbots", "Claude", "News", "OpenAI", "PYMNTS News", "subscriptions", "Technology", "What's Hot", "artificial intelligence" ] }, { "id": "https://www.pymnts.com/?p=2675714", "url": "https://www.pymnts.com/artificial-intelligence-2/2025/metas-llama-4-models-are-bad-for-rivals-but-good-for-enterprises-experts-say/", "title": "Meta\u2019s Llama 4 Models Are Bad for Rivals but Good for Enterprises, Experts Say", "content_html": "Meta\u2019s latest open-source AI models are a shot across the bow to the more expensive closed models from OpenAI, Google, Anthropic and others.
\nBut it\u2019s good news for businesses because the models could lower the cost of deploying artificial intelligence (AI), according to experts.
\nThe social media giant has released two models from its Llama family of models: Llama 4 Scout and Llama 4 Maverick. They are Meta\u2019s first natively multimodal models \u2014 meaning they were built from the ground up to handle text and images; these capabilities were not bolted on.
\nLlama 4 Scout\u2019s unique proposition: It has a context window of up to 10 million tokens, which translates to around 7.5 million words. The previous record holder is Google\u2019s Gemini 2.5 \u2014 at 1 million tokens, with 2 million planned.
\nThe bigger the context window \u2014 the area where users enter the prompt \u2014 the more data and documents one can upload to the AI chatbot.
\nIlia Badeev, head of data science at Trevolution Group, told PYMNTS that his team was still marveling at Gemini 2.5\u2019s 1 million context window when Llama 4 Scout came along with 10 million.
\n\u201cThis is an enormous number. With 17 billion active parameters, we get a \u2018mini\u2019 level model (super-fast and super-cheap) but with an astonishingly large context. And as we know, context is king,\u201d Badeev said. \u201cWith enough context, Llama 4 Scout\u2019s performance on specific applied tasks could be significantly better than many state-of-the-art models.\u201d
\nRead more: Meta Adds \u2018Multimodal\u2019 Models to Its Llama AI Stable
\nBoth Llama 4 Scout and Maverick have 17 billion active parameters, meaning 17 billion of their settings are activated at any one time. In total, however, Scout has 109 billion parameters and Maverick has 400 billion.
\nMeta also said Llama 4 Maverick is cheaper to run: between 19 and 49 cents per million tokens for input (query) and output (response); it runs on one Nvidia H100 DGX server.
\nThe pricing compares with $4.38 per million tokens for OpenAI\u2019s GPT-4o. Gemini 2.0 Flash costs 17 cents per million tokens, while DeepSeek v3.1 costs 48 cents. (While Meta is not in the business of selling AI services, it still seeks to minimize AI costs for itself.)
\n\u201cOne of the biggest blockers to deploying AI has been cost,\u201d Chintan Mota, director of enterprise technology at Wipro, told PYMNTS. \u201cThe infrastructure, the inference, the lock-in \u2014 it all adds up.\u201d
\nHowever, open-source models like Llama 4, DeepSeek and others are enabling companies to build a model fine-tuned to their businesses, trained on their own data and running in their environment, Mota said. \u201cYou\u2019re not stuck waiting for a Gemini or (OpenAI\u2019s) GPT feature release. You have more control over your own data and its security.\u201d
\nMeta\u2019s open-source Llama family will \u201cput pressure on closed models like Gemini. Not because Llama is better, but because it\u2019s good enough,\u201d Mota added. \u201cFor 80% of business use cases \u2014 automating reports, building internal copilots, summarizing knowledge bases \u2014 \u2018good enough\u2019 and affordable beats \u2018perfect\u2019 and pricey.\u201d
\nRead more: Musk\u2019s Grok 3 Takes Aim at Perplexity, OpenAI
\nLlama 4 Scout and Maverick have a mixture-of-experts (MoE) architecture \u2014 meaning they don\u2019t activate all their \u201cexpert\u201d subnetworks for every task. Instead, they pick and choose the right ones \u2014 for speed and to save money.
\nThey were pre-trained on 200 languages, half with over 1 billion tokens each. Meta said this is 10 times more multilingual tokens than Llama 3.
\nMeta said Scout and Maverick were taught by Llama 4 Behemoth, a 2-trillion-parameter model that\u2019s still in training and currently available in preview.
\n\u201cThe three Llama 4 models are geared toward reasoning, coding, and step-by-step problem-solving. However, they do not appear to exhibit the deeper chain-of-thought behavior seen in specialized reasoning models like OpenAI\u2019s \u2018o\u2019 series or DeepSeek R1,\u201d Rogers Jeffrey Leo John, co-founder and CTO of DataChat, told PYMNTS.
\n\u201cStill, despite not being the absolute best model available, Llama 4 outperforms several leading closed-source alternatives on various benchmarks,\u201d John added.
\nFinally, Meta said it made Llama 4 less prone to punting questions it deems too sensitive \u2014 to be more \u201ccomparable to Grok,\u201d the AI model from Elon Musk\u2019s AI startup, xAI. The latest version, Grok 3, is designed to \u201crelentlessly seek the truth,\u201d according to xAI.
\nAccording to Meta, \u201cour goal is to remove bias from our AI models and to make sure that Llama can understand and articulate both sides of a contentious issue.\u201d Llama 4 is less censorious than Llama 3.
\nFor example, Llama 4 refuses to answer queries related to debated political and social topics less than 2% of the time, compared with 7% for Llama 3.3. Meta claims that Llama 4 is more \u201cbalanced\u201d in choosing which prompts not to answer and is getting better at staying politically neutral.
\nFor all PYMNTS AI coverage, subscribe to the daily\u00a0AI\u00a0Newsletter.
\nThe post Meta\u2019s Llama 4 Models Are Bad for Rivals but Good for Enterprises, Experts Say appeared first on PYMNTS.com.
\n", "content_text": "Meta\u2019s latest open-source AI models are a shot across the bow to the more expensive closed models from OpenAI, Google, Anthropic and others.\nBut it\u2019s good news for businesses because they could potentially lower the cost of deploying artificial intelligence (AI), according to experts.\nThe social media giant has released two models from its Llama family of models: Llama 4 Scout and Llama 4 Maverick. They are Meta\u2019s first natively multimodal models \u2014 meaning they were built from the ground up to handle text and images; these capabilities were not bolted on.\nLlama 4 Scout\u2019s unique proposition: It has a context window of up to 10 million tokens, which translates to around 7.5 million words. The record holder to date is Google\u2019s Gemini 2.5 \u2014 at 1 million and going to 2.\nThe bigger the context window \u2014 the area where users enter the prompt \u2014 the more data and documents one can upload to the AI chatbot.\nIlia Badeev, head of data science at Trevolution Group, told PYMNTS that his team was still marveling at Gemini 2.5\u2019s 1 million context window when Llama 4 Scout comes along with 10 million.\n\u201cThis is an enormous number. With 17 billion active parameters, we get a \u2018mini\u2019 level model (super-fast and super-cheap) but with an astonishingly large context. And as we know, context is king,\u201d Badeev said. \u201cWith enough context, Llama 4 Scout\u2019s performance on specific applied tasks could be significantly better than many state-of-the-art models.\u201d\nRead more: Meta Adds \u2018Multimodal\u2019 Models to Its Llama AI Stable\nOnly 1 Nvidia H100 Host Needed\nBoth Llama 4 Scout and Maverick have 17 billion active parameters, meaning the number of settings that are activated at one time. 
In total, however, Scout has 109 billion parameters and Maverick has 400 billion.\nMeta also said Llama 4 Maverick is cheaper to run: between 19 and 49 cents per million tokens for input (query) and output (response); it runs on one Nvidia H100 DGX server.\nThe pricing compares with $4.38 for OpenAI\u2019s GPT-4o. Gemini 2.0 Flash costs 17 cents per million tokens while DeepSeek v3.1 costs 48 cents. (While Meta is not in the business of selling AI services, it still seeks to minimize AI costs for itself.)\n\u201cOne of the biggest blockers to deploying AI has been cost,\u201d Chintan Mota, director of enterprise technology at Wipro, told PYMNTS. \u201cThe infrastructure, the inference, the lock-in \u2014 it all adds up.\u201d\nHowever, open-source models like Llama 4, DeepSeek and others are enabling companies to build a model fine-tuned to their businesses, trained on their own data and running in their environment, Mota said. \u201cYou\u2019re not stuck waiting for a Gemini or (OpenAI\u2019s) GPT feature release. You have more control over your own data and its security.\u201d\nMeta\u2019s open-source Llama family will \u201cput pressure on closed models like Gemini. Not because Llama is better, but because it\u2019s good enough,\u201d Mota added. \u201cFor 80% of business use cases \u2014 automating reports, building internal copilots, summarizing knowledge bases \u2014 \u2018good enough\u2019 and affordable beats \u2018perfect\u2019 and pricey.\u201d\nRead more: Musk\u2019s Grok 3 Takes Aim at Perplexity, OpenAI\nFewer Filters, Just Like Grok\nLlama 4 Scout and Maverick have a mixture-of-experts (MoE) architecture \u2014 meaning they don\u2019t activate all the \u201cexpert\u201d bots for all tasks. Instead, they pick and choose the right ones \u2014 for speed and to save money.\nThey were pre-trained on 200 languages, half with over 1 billion tokens each. 
Meta said this is 10 times more multilingual tokens than Llama 3.\nMeta said Scout and Maverick were taught by Llama 4 Behemoth, a 2-trillion-parameter model that\u2019s still in training. It is in preview.\n\u201cThe three Llama 4 models are geared toward reasoning, coding, and step-by-step problem-solving. However, they do not appear to exhibit the deeper chain-of-thought behavior seen in specialized reasoning models like OpenAI\u2019s \u2018o\u2019 series or DeepSeek R1,\u201d Rogers Jeffrey Leo John, co-founder and CTO of DataChat, told PYMNTS.\n\u201cStill, despite not being the absolute best model available, LLama 4 outperforms several leading closed-source alternatives on various benchmarks,\u201d John added.\nFinally, Meta said it made Llama 4 less prone to punting questions it deems too sensitive \u2014 to be more \u201ccomparable to Grok,\u201d the AI model from Elon Musk\u2019s AI startup, xAI. The latest version, Grok 3, is designed to \u201crelentlessly seek the truth,\u201d according to xAI.\nAccording to Meta, \u201cour goal is to remove bias from our AI models and to make sure that Llama can understand and articulate both sides of a contentious issue.\u201d It is less censorious than Llama 3.\nFor example, Llama 4 refuses to answer queries related to debated political and social topics less than 2% of the time, compared with 7% for Llama 3.3. 
Meta claims that Llama 4 is more \u201cbalanced\u201d in choosing which prompts not to answer and is getting better at staying politically neutral.\n\nFor all PYMNTS AI coverage, subscribe to the daily\u00a0AI\u00a0Newsletter.\n\nThe post Meta\u2019s Llama 4 Models Are Bad for Rivals but Good for Enterprises, Experts Say appeared first on PYMNTS.com.", "date_published": "2025-04-09T09:00:34-04:00", "date_modified": "2025-04-09T08:46:47-04:00", "authors": [ { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/f05cc0fdcc9e387e4f3570c17158c503?s=512&d=blank&r=g" } ], "author": { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/f05cc0fdcc9e387e4f3570c17158c503?s=512&d=blank&r=g" }, "image": "https://www.pymnts.com/wp-content/uploads/2025/04/Meta-Llama-4-models.png", "tags": [ "AI", "AI chatbots", "AI models", "artificial intelligence", "chatbots", "DeepSeek", "Llama 4 AI", "Meta", "Meta Llama 4 AI", "News", "NVIDIA", "open source AI", "PYMNTS News", "Social Media", "Technology", "xAI", "artificial intelligence" ] }, { "id": "https://www.pymnts.com/?p=2633804", "url": "https://www.pymnts.com/artificial-intelligence-2/2025/5-ways-ai-can-help-mitigate-the-impact-of-tariffs-on-business/", "title": "5 Ways AI Can Help Mitigate the Impact of Tariffs on Business", "content_html": "The Trump tariffs are continuing to roil the business world, plunging the U.S. stock market dangerously close to bear territory. External forces like trade policies mean companies have limited leeway on how to protect themselves.
\n\u201cTariffs, like any crisis, are extremely dynamic \u2014 and the latest round that imposed tariffs on all U.S. importers is a perfect example,\u201d Leagh Turner, CEO of Coupa Software, told PYMNTS.
\n\u201cThey impact businesses in different ways depending on their country, product type and trade relationships. That makes it difficult for leaders to predict the full impact to their business.\u201d
\nBut artificial intelligence (AI) can help, despite the daily turbulence. A Zilliant survey found that 83% of U.S. C-suite leaders are using AI to adapt their pricing strategies to economic volatility, according to Stephan Liozu, chief value officer.
\nBefore diving into AI solutions, companies must first assess the following, according to Praful Saklani, CEO of Pramata.\nAssess areas of the business potentially exposed to higher costs.\nDetermine the scale of the exposure by contract and relationship.\nUnderstand the tools and strategies at one\u2019s disposal.
\nHere are ways companies can use AI to navigate tariffs or cut costs, according to executives.
\n1. Use AI to monitor and understand shifting tariff policies in real time, allowing businesses to pivot more quickly.
\n\u201cAI-powered trade policy monitoring scans government announcements and regulatory updates to forecast potential tariff shifts,\u201d Tarun Chandrasekhar, president and CPO at Syndigo, told PYMNTS.
\nHe added that historical analysis of past trade policies and macroeconomic trends identifies patterns that can give brands insight into how possible future tariff increases or decreases could impact them, such as how tariffs on specific materials affected sales of certain clothing items.
\n2. Use AI to find new sources for raw materials and other supplies.
\n\u201cAI can also facilitate material selection by assessing availability, compliance, and cost implications, which helps brands find substitute materials when needed without compromising on quality or compliance with regulatory standards,\u201d Chandrasekhar said.
\nVarious AI models optimize for price, quality and delivery time, while minimizing the disruption for your own customers, said Vaclav Vincalek, CTO at Hiswai, in comments to PYMNTS.
\nChandrasekhar said AI can make tariff classification and compliance \u201csignificantly\u201d easier as well, to avoid penalties and overpayment issues. Automated classification systems can scan product attributes to assign correct harmonized system codes, minimizing the risk of misclassification.
\nRead more: 82% of US Workforce Believes GenAI Boosts Productivity
\n3. Improve supplier resiliency and scenario planning with AI.
\n\u201cTap into buyer-supplier networks to run different scenarios to find near-shore or offshore suppliers, negotiate terms and reroute supply chains \u2014 rapidly. Essentially, enabling the quick pivot,\u201d Turner said.
\nShe also recommended optimizing operations to improve on-hand inventory and cash. The availability of planning and forecasting tools means companies can compare supplier pricing, data and risks to optimize inventory, giving them a cushion, Turner said.
\nIt\u2019s also important to assume total control over the spending lifecycle. Having insights and a holistic view of spending and suppliers sets up companies to make agile changes in the supply chain, she added.
\n4. AI can help increase efficiency, reduce costs and raise worker productivity.
\nTariffs will cut into the bottom line of many companies, but AI can help keep costs down while ensuring productivity stays up.
\nAccording to a January 2025 PYMNTS Intelligence report, \u201cGenAI: A Generational Look at AI Usage and Attitudes,\u201d 82% of workers who use generative AI at least weekly say it increases productivity. But half of these workers also worry that AI would take their jobs.
\nAs for cost reduction, Shopify CEO Tobi Lutke is looking to save money by using AI instead of hiring more workers.
\nIn a memo to employees he posted\u00a0on X, Lutke wrote that \u201cbefore asking for more headcount and resources, teams must demonstrate why they cannot get what they want done using AI.\u201d
\n5. Be realistic about how much AI can help and use other strategies as well.
\nPierre Lapr\u00e9e, chief product officer of SpendHQ, told PYMNTS that while AI has a role to play, it\u2019s \u201cmisguided\u201d to believe that AI will automatically offset rising costs from trade policy shifts.
\n\u201cTariffs are complex, and so is procurement. You need more than an algorithm \u2014 you need clean, structured, specific data. Without that, AI won\u2019t reduce risk. It will amplify it,\u201d Lapr\u00e9e said.
\nPaul Magel, president of the supply chain tech division at CGS, agreed. He told PYMNTS that the data feeding into the AI systems must be clean and accurate for it to work optimally. \u201cAI is not a panacea,\u201d Magel said. \u201cIt\u2019s incredibly helpful but requires the right approach to be effective.\u201d
\nThe post 5 Ways AI Can Help Mitigate the Impact of Tariffs on Business appeared first on PYMNTS.com.
\n", "content_text": "The Trump tariffs are continuing to roil the business world, plunging the U.S. stock market dangerously close to bear territory. External forces like trade policies mean companies have limited leeway on how to protect themselves.\n\u201cTariffs, like any crisis, are extremely dynamic \u2014 and the latest round that imposed tariffs on all U.S. importers is a perfect example,\u201d Leagh Turner, CEO of Coupa Software, told PYMNTS.\n\u201cThey impact businesses in different ways depending on their country, product type and trade relationships. That makes it difficult for leaders to predict the full impact to their business.\u201d\nBut artificial intelligence (AI) can help, despite the daily turbulence. A Zilliant survey found that 83% of U.S. C-suite leaders are using AI to adapt their pricing strategies to economic volatility, according to Stephan Liozu, chief value officer.\nBefore diving into AI solutions, companies must first assess the following, according to Praful Saklani, CEO of Pramata.\n\nAssess areas of the business potentially exposed to higher costs.\nDetermine the scale of the exposure by contract and relationship.\nUnderstand the tools and strategies at one\u2019s disposal.\n\nHere are ways companies can use AI to navigate tariffs or cut costs, according to executives.\n1. Use AI to monitor and understand shifting tariff policies in real time, allowing businesses to pivot more quickly.\n\u201cAI-powered trade policy monitoring scans government announcements and regulatory updates to forecast potential tariff shifts,\u201d Tarun Chandrasekhar, president and CPO at Syndigo, told PYMNTS.\nHe added that historical analysis of past trade policies and macroeconomic trends identifies patterns that can give brands insight into how possible future tariff increases or decreases could impact them, such as how tariffs on specific materials affected sales of certain clothing items.\n2. 
Use AI to find new sources for raw materials and other supplies.\n\u201cAI can also facilitate material selection by assessing availability, compliance, and cost implications, which helps brands find substitute materials when needed without compromising on quality or compliance with regulatory standards,\u201d Chandrasekhar said.\nVarious AI models optimize for price, quality, and time delivery, while minimizing the disruption for your own customers, said Vaclav Vincalek, CTO at Hiswai, in comments to PYMNTS.\nChandrasekhar said AI can make tariff classification and compliance \u201csignificantly\u201d easier as well, to avoid penalties and overpayment issues. Automated classification systems can scan product attributes to assign correct harmonized system codes, minimizing the risk of misclassification.\nRead more: 82% of US Workforce Believes GenAI Boosts Productivity\n3. Improve supplier resiliency and scenario planning with AI. \n\u201cTap into buyer-supplier networks to run different scenarios to find near-shore or offshore suppliers, negotiate terms and reroute supply chains \u2014 rapidly. Essentially, enabling the quick pivot,\u201d Turner said.\nShe also recommended optimizing operations to improve on-hand inventory and cash. The availability of planning and forecasting tools means companies can compare supplier pricing, data and risks to optimize inventory, giving them a cushion, Turner said.\nIt\u2019s also important to assume total control over the spending lifecycle. Having insights and a holistic view of spending and suppliers sets up companies to make agile changes in the supply chain, she added.\n4. 
AI can help increase efficiency, reduce costs and raise worker productivity.\nTariffs will cut into the bottom line of many companies but AI can help keep costs down while ensuring productivity stays up.\nAccording to a January 2025 PYMNTS Intelligence report, \u201cGenAI: A Generational Look at AI Usage and Attitudes,\u201d 82% of workers who use generative AI at least weekly say it increases productivity. But half of these workers also worry that AI would take their jobs.\nAs for cost reduction, Shopify CEO Tobi Lutke is looking to save money by using AI instead of hiring more workers.\nIn a memo to employees he posted\u00a0on X, Lutke wrote that \u201cbefore asking for more headcount and resources, teams must demonstrate why they cannot get what they want done using AI.\u201d\n5. Be realistic about how much AI can help and use other strategies as well.\nPierre Lapr\u00e9e, chief product officer of SpendHQ, told PYMNTS that while AI has a role to play, it\u2019s \u201cmisguided\u201d to believe that AI will automatically offset rising costs from trade policy shifts.\n\u201cTariffs are complex, and so is procurement. You need more than an algorithm \u2014 you need clean, structured, specific data. Without that, AI won\u2019t reduce risk. It will amplify it,\u201d Lapr\u00e9e said.\nPaul Magel, president of the supply chain tech division at CGS, agreed. He told PYMNTS that the data feeding into the AI systems must be clean and accurate for it to work optimally. \u201cAI is not a panacea,\u201d Magel said. 
\u201cIt\u2019s incredibly helpful but requires the right approach to be effective.\u201d\nThe post 5 Ways AI Can Help Mitigate the Impact of Tariffs on Business appeared first on PYMNTS.com.", "date_published": "2025-04-08T19:13:41-04:00", "date_modified": "2025-04-08T19:13:41-04:00", "authors": [ { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/f05cc0fdcc9e387e4f3570c17158c503?s=512&d=blank&r=g" } ], "author": { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/f05cc0fdcc9e387e4f3570c17158c503?s=512&d=blank&r=g" }, "image": "https://www.pymnts.com/wp-content/uploads/2025/04/AI-tariffs.jpg", "tags": [ "AI", "artificial intelligence", "CGS", "Coupa software", "Donald Trump", "GenAI", "generative AI", "Hiswai", "Leagh Turner", "News", "Paul Magel", "Pierre Lapr\u00e9e", "Praful Saklani", "Pramata", "PYMNTS News", "shopify", "SpendHQ", "Stephan Liozu", "Syndigo", "tariffs", "Tarun Chandrasekhar", "Tobi Lutke", "Vaclav Vincalek", "Zilliant", "artificial intelligence" ] }, { "id": "https://www.pymnts.com/?p=2610066", "url": "https://www.pymnts.com/artificial-intelligence-2/2025/ai-startups-signalfire-raises-1-billion-to-invest-in-ai-startups/", "title": "AI Startups: SignalFire Raises $1 Billion to Invest in AI Startups", "content_html": "Venture capital firm SignalFire has raised more than $1 billion in new capital to invest in artificial intelligence (AI) startups.
\nIt now has a total of $3 billion under management, according to a Monday (April 7) news release.
\nThe funding will go toward seed to Series B investments in applied AI. The capital will be deployed through the firm\u2019s Seed, Early, Executive-in-Resident (XIR) and Opportunities funds, per the release.
\nSignalFire uses AI and data to find and develop high-growth startups. Through its Beacon AI technology, the firm analyzes data from 650 million professionals and 80 million organizations to guide investment and operational decisions.
\nBeacon AI spots market and talent trends to help SignalFire investors and portfolio companies build their teams and products, the company said.
\nUnlike traditional VCs adapting to AI, SignalFire was built from the ground up with AI in its DNA. As such, the firm said it could spot breakthrough startups \u201cearlier\u201d and accelerate company growth.
\n\u201cAI\u2019s next frontier isn\u2019t invention, it\u2019s implementation,\u201d SignalFire partner Wayne Hu said in the release. \u201cWith these funds, we\u2019ll continue to back founders who transform theoretical AI technology into market-changing solutions.\u201d
\nRead more: VC Investors Shrink as Money Goes to Big Tech Startups
\nNexthop AI, a startup developing advanced networking solutions for cloud clusters, has emerged from stealth with $110 million in funding.
\nThe round was led by Lightspeed Venture Partners, with backing from Kleiner Perkins, WestBridge, Battery Ventures and Emergent Ventures, according to a press release. The funds will accelerate product development tailored to meet the growing demands of AI training and inference.
\nHyperscalers \u2014 cloud computing giants \u2014 are investing billions annually in their GPU and networking infrastructure. They also require highly optimized software and hardware infrastructure attuned to data center build-outs, the startup said.
\n\u201cThe world\u2019s largest cloud providers need a new generation of networking capabilities to keep pace with the demands of AI workloads,\u201d Guru Chahal, partner at Lightspeed Venture Partners, said in the announcement. \u201cNexthop AI is filling a critical gap in this $35 billion market with its deep domain expertise, pioneering technology and customized solutions.\u201d
\nThe company partners with cloud providers, acting as an extension of their engineering teams to deliver scalable, power-efficient artificial intelligence infrastructure.
\nRead also: Nvidia and xAI Sign On to $30 Billion AI Infrastructure Fund
\nOpenAI has launched OpenAI Academy, a free learning hub for all things AI. The education website is open to everyone, regardless of background.
\nThe site provides videos, tutorials and other content. People can meet virtually or in person to learn, network and collaborate.
\nTopics include \u201cChatGPT for Data Analysis,\u201d \u201cAdvanced Prompt Engineering,\u201d and \u201cCollaborating with AI: Group Work and Projects Simplified.\u201d
\nThere are also tutorials for AI in education, use of AI in personal life and how to use the company\u2019s video generator Sora, as well as developer courses.
\nOpenAI is not offering certification or accreditation at this time. All lessons are in English, with more languages to come.
\nFor all PYMNTS AI coverage, subscribe to the daily\u00a0AI\u00a0Newsletter.
\nThe post AI Startups: SignalFire Raises $1 Billion to Invest in AI Startups appeared first on PYMNTS.com.
\n", "content_text": "Venture capital firm SignalFire has raised more than $1 billion in new capital to invest in artificial intelligence (AI) startups.\nIt now has a total of $3 billion under management, according to a Monday (April 7) news release.\nThe funding will go toward seed to Series B investments in applied AI. The capital will be deployed through the firm\u2019s Seed, Early, Executive-in-Resident (XIR) and Opportunities funds, per the release.\nSignalFire uses AI and data to find and develop high-growth startups. Through its Beacon AI technology, the firm analyzes data from 650 million professionals and 80 million organizations to guide investment and operational decisions.\nBeacon AI spots market and talent trends to help SignalFire investors and portfolio companies build their teams and products, the company said.\nUnlike traditional VCs adapting to AI, SignalFire was built from the ground up with AI in its DNA. As such, the firm said it could spot breakthrough startups \u201cearlier\u201d and accelerate company growth.\n\u201cAI\u2019s next frontier isn\u2019t invention, it\u2019s implementation,\u201d SignalFire partner Wayne Hu said in the release. \u201cWith these funds, we\u2019ll continue to back founders who transform theoretical AI technology into market-changing solutions.\u201d\nRead more: VC Investors Shrink as Money Goes to Big Tech Startups\nAI Infrastructure Startup Nexthop AI Raises $110 Million\nNexthop AI, a startup developing advanced networking solutions for cloud clusters, has emerged from stealth with $110 million in funding.\nThe round was led by Lightspeed Venture Partners, with backing from Kleiner Perkins, WestBridge, Battery Ventures and Emergent Ventures, according to a press release. The funds will accelerate product development tailored to meet the growing demands of AI training and inference.\nHyperscalers \u2014 cloud computing giants \u2014 are investing billions annually in their GPU and networking infrastructure. 
They also require highly optimized software and hardware infrastructure attuned to data center build outs, the startup said.\n\u201cThe world\u2019s largest cloud providers need a new generation of networking capabilities to keep pace with the demands of AI workloads,\u201d Guru Chahal, partner at Lightspeed Venture Partners, said in the announcement. \u201cNexthop AI is filling a critical gap in this $35 billion market with its deep domain expertise, pioneering technology and customized solutions.\u201d\nThe company partners with cloud providers, acting as an extension of their engineering teams to deliver scalable, power-efficient artificial intelligence infrastructure.\nRead also: Nvidia and xAI Sign On to $30 Billion AI Infrastructure Fund\nOpenAI Opens Free AI Academy\nOpenAI has launched OpenAI Academy, a free learning hub for all things AI. The education website is open to all, from all types of backgrounds.\nThe site provides videos, tutorials and other content. People can meet virtually or in person to learn, network and collaborate.\nTopics include \u201cChatGPT for Data Analysis,\u201d \u201cAdvanced Prompt Engineering,\u201d and \u201cCollaborating with AI: Group Work and Projects Simplified.\u201d\nThere are also tutorials for AI in education, use of AI in personal life and how to use the company\u2019s video generator Sora, as well as developer courses.\nOpenAI is not offering certification or accreditation at this time. 
All lessons are in English, with more languages to come.\n\nFor all PYMNTS AI coverage, subscribe to the daily\u00a0AI\u00a0Newsletter.\n\nThe post AI Startups: SignalFire Raises $1 Billion to Invest in AI Startups appeared first on PYMNTS.com.", "date_published": "2025-04-08T12:00:24-04:00", "date_modified": "2025-04-08T08:49:03-04:00", "authors": [ { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/f05cc0fdcc9e387e4f3570c17158c503?s=512&d=blank&r=g" } ], "author": { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/f05cc0fdcc9e387e4f3570c17158c503?s=512&d=blank&r=g" }, "image": "https://www.pymnts.com/wp-content/uploads/2024/06/AI-investments-artificial-intelligence.jpg", "tags": [ "AI", "AI startups", "artificial intelligence", "funding", "Investments", "News", "Nexthop AI", "OpenAI", "PYMNTS News", "SignalFire", "startups", "Technology", "artificial intelligence" ] }, { "id": "https://www.pymnts.com/?p=2576305", "url": "https://www.pymnts.com/artificial-intelligence-2/2025/63-percent-adoption-rate-shows-genai-enthusiasm-among-younger-consumers/", "title": "63% Adoption Rate Shows GenAI Enthusiasm Among Younger Consumers", "content_html": "Generative artificial intelligence (GenAI) is gaining traction among U.S. consumers across all age groups, according to a new PYMNTS Intelligence report, even as enthusiasm cools for voice assistants.
\nThe study, titled \u201cGenAI and Voice Assistants: Adoption and Trust Across Generations,\u201d reveals changing consumer preferences concerning GenAI, indicating a shift toward newer technologies as trust in older voice assistant technology declines. This trend, ironically observed among the initial proponents of voice assistants, raises questions about the enduring appeal of technologies in a rapidly innovating landscape.
\nThe PYMNTS special report, which surveyed 2,721 U.S. consumers between June 5 and June 21, 2024, examined consumer habits and opinions regarding both voice assistants and GenAI. The findings highlight a decrease in consumer confidence concerning the future capabilities of voice assistants, with the percentage believing they will become as smart and reliable as humans falling from 73% in March 2023 to 60% in June 2024. This erosion of trust is particularly pronounced among millennials and bridge millennials.
\nConcurrently, GenAI has experienced significant adoption, with 34% of U.S. consumers having used it in the 90 days preceding the survey. The report suggests that the perceived lack of advancement and dependability in voice assistants is contributing to this change, particularly as consumers hold high expectations for technological progress.
\n\n\n\n
Key data points from the report include:
\nShifting Confidence in AI Technologies: Confidence in voice assistants becoming as smart and reliable as humans decreased from 73% in March 2023 to 60% in June 2024. Trust in voice assistants to handle critical situations has also declined, with only 43% of U.S. consumers trusting them to call for help after an auto accident, down from 50% the previous year.
\nGenerational Adoption Patterns: 63% of Gen Z consumers reported using GenAI in the past 90 days, demonstrating the highest adoption rate. Familiarity with GenAI among baby boomers and seniors jumped from 23% to 41%.
\nUse Cases and Potential for Reintegration: 47% of consumers who used GenAI did so for quick information retrieval, indicating a key utility. Millennials and bridge millennials (both at 32%) were the most frequent users of voice-activated devices.
\nThe PYMNTS report underscores the enduring perceived value of GenAI for tasks like information retrieval, maintaining a stable 63% utility rating among U.S. consumers.
\nThe study highlights distinct generational patterns in technology adoption, with zillennials, as digital natives, showing a greater propensity to integrate new tools like GenAI into their daily lives.
\nHowever, the observed decline in trust surrounding voice assistants serves as a lesson about the necessity of consistent performance and reliability for the sustained adoption of any novel technology.
\nThe report\u2019s methodology involved a census-balanced survey of 2,721 U.S. consumers, with an oversampling of the zillennial generation to enable more in-depth analysis.
\nThe post 63% Adoption Rate Shows GenAI Enthusiasm Among Younger Consumers appeared first on PYMNTS.com.
\n", "content_text": "Generative artificial intelligence (GenAI) is gaining traction among U.S. consumers across all age groups, according to a new PYMNTS Intelligence report, even as enthusiasm cools for voice assistants. \nThe study, titled \u201cGenAI and Voice Assistants: Adoption and Trust Across Generations,\u201d reveals changing consumer preferences concerning GenAI indicating a shift toward newer technologies while trust in older voice assistant technology declines. This trend, ironically observed among the initial proponents of voice assistants, prompts questions about the enduring appeal of technologies in a rapidly innovating landscape.\nThe PYMNTS special report, which surveyed 2,721 U.S. consumers between June 5 and June 21, 2024, examined consumer habits and opinions regarding both voice assistants and GenAI. The findings highlight a decrease in consumer confidence concerning the future capabilities of voice assistants, with the percentage believing they will become as smart and reliable as humans falling from 73% in March 2023 to 60% in June 2024. This erosion of trust is particularly pronounced among millennials and bridge millennials. \nConcurrently, GenAI has experienced significant adoption, with 34% of U.S. consumers having used it in the 90 days preceding the survey. The report suggests that the perceived lack of advancement and dependability in voice assistants is contributing to this change, particularly as consumers hold high expectations for technological progress.\n \n\n\nKey data points from the report include:\n\nShifting Confidence in AI Technologies: Confidence in voice assistants becoming as smart and reliable as humans decreased from\u00a073%\u00a0in March 2023 to\u00a060%\u00a0in June 2024. Trust in voice assistants to handle critical situations has also declined, with only\u00a043%\u00a0of U.S. 
consumers trusting them to call for help after an auto accident, down from\u00a050%\u00a0the previous year.\nGenerational Adoption Patterns: 63%\u00a0of Gen Z consumers reported using GenAI in the past 90 days, demonstrating the highest adoption rate. Familiarity with GenAI among baby boomers and seniors jumped from 23% to 41%.\nUse Cases and Potential for Reintegration: 47%\u00a0of consumers who used GenAI did so for quick information retrieval, indicating a key utility. Millennials and bridge millennials (both at\u00a032%) were the most frequent users of voice-activated devices.\n\nThe PYMNTS report underscores the enduring perceived value of GenAI for tasks like information retrieval, maintaining a stable 63% utility rating among U.S. consumers.\n The study highlights distinct generational patterns in technology adoption, with zillennials, as digital natives, showing a greater propensity to integrate new tools like GenAI into their daily lives. \nHowever, the observed decline in trust surrounding voice assistants serves as a lesson about the necessity of consistent performance and reliability for the sustained adoption of any novel technology. \nThe report\u2019s methodology involved a census-balanced survey of 2,721 U.S. 
consumers, with an oversampling of the zillennial generation to enable more in-depth analysis.\nThe post 63% Adoption Rate Shows GenAI Enthusiasm Among Younger Consumers appeared first on PYMNTS.com.", "date_published": "2025-04-08T04:00:55-04:00", "date_modified": "2025-04-07T18:17:54-04:00", "authors": [ { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/f05cc0fdcc9e387e4f3570c17158c503?s=512&d=blank&r=g" } ], "author": { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/f05cc0fdcc9e387e4f3570c17158c503?s=512&d=blank&r=g" }, "image": "https://www.pymnts.com/wp-content/uploads/2025/04/voice-assistant-GenAI.jpg", "tags": [ "AI", "artificial intelligence", "data point", "Featured News", "GenAI", "generative AI", "News", "PYMNTS Intelligence", "PYMNTS News", "The Data Point", "voice assistants", "artificial intelligence" ] }, { "id": "https://www.pymnts.com/?p=2575539", "url": "https://www.pymnts.com/artificial-intelligence-2/2025/shopify-ceo-tobias-lutke-employees-must-learn-to-use-ai-effectively/", "title": "Shopify CEO Tobias L\u00fctke: Employees Must Learn to Use AI Effectively", "content_html": "Shopify now considers the use of artificial intelligence (AI) by employees to be a \u201cbaseline expectation,\u201d CEO Tobias L\u00fctke said in an internal memo that he posted on X after learning it had been leaked.
\nUsing AI is critical at a time when merchants and entrepreneurs are leveraging the technology and when Shopify is tasked with making its software the best platform on which they can develop their businesses, L\u00fctke said in the memo.
\n\u201cWe do this by keeping everyone cutting edge and bringing all the best tools to bear so our merchants can be more successful than they themselves used to imagine,\u201d he said. \u201cFor that we need to be absolutely ahead.\u201d
\nL\u00fctke said in the post that he is using AI all the time and that he invited employees to tinker with the technology last summer, but that his statement at the time was \u201ctoo much of a suggestion.\u201d
\nNow, he said, he wants to change that perception because continuous improvement is expected of everyone at Shopify and AI can deliver necessary capabilities.
\n\u201cUsing AI effectively is now a fundamental expectation of everyone at Shopify,\u201d L\u00fctke said in the memo. \u201cIt\u2019s a tool of all trades today, and will only grow in importance.\u201d
\nL\u00fctke said in the memo that Shopify will add questions about AI usage to its performance and peer review questionnaire, that employees are expected to share what they learn about AI with their colleagues, and that teams who want to ask for more headcount and resources must demonstrate why AI cannot do what they need done.
\n\u201cWhat we need to succeed is our collective sum total skill and ambition at applying our craft, multiplied by AI, for the benefit of our merchants,\u201d L\u00fctke wrote in the memo.
\nEighty-two percent of workers across several industries who use generative AI (GenAI) at least weekly agree that it can increase productivity, according to the PYMNTS Intelligence report, \u201cWorkers Say Fears About GenAI Taking Their Jobs Is Overblown.\u201d
\nThe report also found that 50% of those who use GenAI weekly worry that the technology could eventually eliminate their specific job, compared to 24% of those who are unfamiliar with it.
\nThe post Shopify CEO Tobias L\u00fctke: Employees Must Learn to Use AI Effectively appeared first on PYMNTS.com.
\n", "content_text": "Shopify now considers the use of artificial intelligence (AI) by employees to be a \u201cbaseline expectation,\u201d CEO Tobias L\u00fctke said in an internal memo that he posted on X after learning it had been leaked.\nUsing AI is critical at a time when merchants and entrepreneurs are leveraging the technology and when Shopify is tasked with making its software the best platform on which they can develop their businesses, L\u00fctke said in the memo.\n\u201cWe do this by keeping everyone cutting edge and bringing all the best tools to bear so our merchants can be more successful than they themselves used to imagine,\u201d he said. \u201cFor that we need to be absolutely ahead.\u201d\nL\u00fctke said in the post that he is using AI all the time and that he invited employees to tinker with the technology last summer, but that his statement at the time was \u201ctoo much of a suggestion.\u201d\nNow, he said, he wants to change that perception because continuous improvement is expected of everyone at Shopify and AI can deliver necessary capabilities.\n\u201cUsing AI effectively is now a fundamental expectation of everyone at Shopify,\u201d L\u00fctke said in the memo. 
\u201cIt\u2019s a tool of all trades today, and will only grow in importance.\u201d\nL\u00fctke said in the memo that Shopify will add questions about AI usage to its performance and peer review questionnaire, that employees are expected to share what they learn about AI with their colleagues, and that teams who want to ask for more headcount and resources must demonstrate why AI cannot do what they need done.\n\u201cWhat we need to succeed is our collective sum total skill and ambition at applying our craft, multiplied by AI, for the benefit of our merchants,\u201d L\u00fctke wrote in the memo.\nEighty-two percent of workers across several industries who use generative AI (GenAI) at least weekly agree that it can increase productivity, according to the PYMNTS Intelligence report, \u201cWorkers Say Fears About GenAI Taking Their Jobs Is Overblown.\u201d\nThe report also found that 50% of those who use GenAI weekly worry that the technology could eventually eliminate their specific job, compared to 24% of those who are unfamiliar with it.\nThe post Shopify CEO Tobias L\u00fctke: Employees Must Learn to Use AI Effectively appeared first on PYMNTS.com.", "date_published": "2025-04-07T17:30:46-04:00", "date_modified": "2025-04-07T17:30:46-04:00", "authors": [ { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/f05cc0fdcc9e387e4f3570c17158c503?s=512&d=blank&r=g" } ], "author": { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/f05cc0fdcc9e387e4f3570c17158c503?s=512&d=blank&r=g" }, "image": "https://www.pymnts.com/wp-content/uploads/2025/04/Shopify-AI-1.jpg", "tags": [ "AI", "artificial intelligence", "ecommerce", "GenAI", "generative AI", "News", "PYMNTS News", "Retail", "shopify", "tobias lutke", "What's Hot", "artificial intelligence" ] }, { "id": "https://www.pymnts.com/?p=2570537", "url": 
"https://www.pymnts.com/artificial-intelligence-2/2025/ai-explained-whats-a-small-language-model-and-how-can-business-use-it/", "title": "AI Explained: What\u2019s a Small Language Model and How Can Business Use It?", "content_html": "Artificial intelligence (AI) is now a household word, thanks to the popularity of large language models like ChatGPT. These large models are trained on the whole internet and often have hundreds of billions of parameters \u2014 settings inside the model that help it guess what word comes next in a sequence. The more parameters, the more sophisticated the model.
\nA small language model (SLM) is a scaled-down version of a large language model (LLM). It doesn\u2019t have as many parameters, but users may not need the extra power, depending on the task at hand. As an analogy, people don\u2019t need a supercomputer to do basic word processing. They just need a regular PC.
\nBut while SLMs are smaller in size, they can still be powerful. In many cases, per IBM data, they are faster, cheaper and offer more control \u2014 key for companies looking to deploy powerful AI into their operations without breaking the bank.
\nLarge language models can have trillions of parameters, as is reported of OpenAI\u2019s GPT-4. In contrast, small language models typically have between a few million and a few billion parameters.
\nAccording to a January 2025 paper by Amazon researchers, SLMs in the range of 1 billion to 8 billion parameters performed as well as or even outperformed large models.
\nFor example, SLMs can outperform LLMs in certain domains because they are trained on data from specific industries, while LLMs do better at general knowledge.
\nSLMs also require far less computing power. They can be deployed on PCs, mobile devices or in company servers instead of the cloud. This makes them faster, cheaper and easier to fine-tune for specific business needs.
\nSee also: AI Explained: What Is a Large Language Model and Why Should Businesses Care?
\nAdvantages and Disadvantages of SLMs
\nSmall language models are quickly becoming popular among businesses that want the benefits of AI without the steep cost and complexity of LLMs.
\nThe following are advantages of SLMs over LLMs:
\nCost efficiency: Large language models are expensive to run, especially at scale. Small models, on the other hand, can operate on personal computers or devices like smartphones and IoT sensors. Using SLMs along with LLMs for more critical and complex tasks can keep AI costs down.
\nData privacy and control: When using an LLM, which means sending data to the cloud, there is always a privacy concern. Small models can be deployed entirely on premises, meaning companies retain full control over their data and workflows. This is especially important in regulated industries like finance and healthcare.
\nSpeed and responsiveness: Because they are lighter, small models deliver responses more quickly and can operate with less latency. This is particularly valuable in real-time settings such as customer service chatbots.
\n\u201cLower data and training requirements for SLMs can translate to fast turnaround times and expedited ROI,\u201d according to Intel.
\nDisadvantages of SLMs:
\nBias learned from LLMs: Since smaller models are truncated versions of large models, bias in the parent model can be passed on.
\nLower performance on complex tasks: Since they\u2019re not as robust as the large models, they might be less proficient in complicated tasks that require knowledge in a comprehensive range of topics.
\nNot great at general tasks: SLMs tend to be more specialized, so they are not as good as LLMs at general tasks.
\nAs for hallucinations, since SLMs are built on smaller, more focused datasets, they\u2019re well suited for industry-specific applications. As such, \u201ctraining on a dataset that\u2019s built for a specific industry, field or company helps SLMs develop a deep and nuanced understanding that can lower the risk of erroneous outputs,\u201d according to Intel.
\nRead more: How AI Is Different From Web3, Blockchain and Crypto
\nMeta\u2019s Llama Leads by a Mile
\nThe most popular SLMs in the last two years \u201cby far\u201d have been those in Meta\u2019s open-source Llama 2 and 3 families, according to the Amazon research paper.
\nLlama 3 comes in 8 billion, 70 billion and 405 billion parameter models, while Llama 2 has 7 billion, 13 billion, 34 billion and 70 billion versions. The SLMs would be the 8 billion model from Llama 3 and the 7 billion and 13 billion models from Llama 2. (Meta just released Llama 4 this week.)
\nNew entrant DeepSeek R1-1.5B offers 1.5 billion parameters as the first reasoning model from the Chinese AI startup.
\nOther SLMs include Google\u2019s Gemini Nano (1.8 billion and 3.25 billion parameter versions) and its Gemma family of open-source models. Last month, Google unveiled Gemma 3, which comes in 1 billion, 4 billion, 12 billion and 27 billion parameter versions.
\nLast October, French AI startup and OpenAI rival Mistral unveiled a new family of SLMs: Ministraux, at 3 and 8 billion parameters. Its first SLM is Mistral 7B, which has 7 billion parameters.
\nAnother notable SLM is Phi-2 from Microsoft. Despite having only 2.7 billion parameters, Phi-2 performs well in math, code and reasoning tasks. It was trained on a carefully curated dataset, proving that smarter data selection can make even very small models capable.
\nModel repository Hugging Face hosts hundreds of open-source SLMs available for companies to use.
\nFor all PYMNTS AI coverage, subscribe to the daily\u00a0AI\u00a0Newsletter.
\nThe post AI Explained: What\u2019s a Small Language Model and How Can Business Use It? appeared first on PYMNTS.com.
\n", "content_text": "Artificial intelligence (AI) is now a household word, thanks to the popularity of large language models like ChatGPT. These large models are trained on the whole internet and often have hundreds of billions of parameters \u2014 settings inside the model that help it guess what word comes next in a sequence. The more parameters, the more sophisticated the model.\nA small language model (SLM) is a scaled-down version of a large language model (LLM). It doesn\u2019t have as many parameters, but users may not need the extra power, depending on the task at hand. As an analogy, people don\u2019t need a supercomputer to do basic word processing. They just need a regular PC.\nBut while SLMs are smaller in size, they can still be powerful. In many cases, per IBM data, they are faster, cheaper and offer more control \u2014 key for companies looking to deploy powerful AI into their operations without breaking the bank.\nLarge language models can have trillions of parameters, as is reported of OpenAI\u2019s GPT-4. In contrast, small language models typically have between a few million and a few billion parameters.\nAccording to a January 2025 paper by Amazon researchers, SLMs in the range of 1 billion to 8 billion parameters performed as well as or even outperformed large models.\nFor example, SLMs can outperform LLMs in certain domains because they are trained on data from specific industries, while LLMs do better at general knowledge.\nSLMs also require far less computing power. They can be deployed on PCs, mobile devices or in company servers instead of the cloud. 
This makes them faster, cheaper and easier to fine-tune for specific business needs.\nSee also: AI Explained: What Is a Large Language Model and Why Should Businesses Care?\nAdvantages and Disadvantages of SLMs\nSmall language models are quickly becoming popular among businesses that want the benefits of AI without the steep cost and complexity of LLMs.\nThe following are advantages of SLMs over LLMs:\n\nCost efficiency: Large language models are expensive to run, especially at scale. Small models, on the other hand, can operate on personal computers or devices like smartphones and IoT sensors. Using SLMs along with LLMs for more critical and complex tasks can keep AI costs down.\nData privacy and control: When using an LLM, which means sending data to the cloud, there is always a privacy concern. Small models can be deployed entirely on premises, meaning companies retain full control over their data and workflows. This is especially important in regulated industries like finance and healthcare.\nSpeed and responsiveness: Because they are lighter, small models deliver responses more quickly and can operate with less latency. This is particularly valuable in real-time settings such as customer service chatbots.\n\n\u201cLower data and training requirements for SLMs can translate to fast turnaround times and expedited ROI,\u201d according to Intel.\nDisadvantages of SLMs:\n\nBias learned from LLMs: Since smaller models are truncated versions of large models, bias in the parent model can be passed on.\nLower performance on complex tasks: Since they\u2019re not as robust as the large models, they might be less proficient in complicated tasks that require knowledge in a comprehensive range of topics.\nNot great at general tasks: SLMs tend to be more specialized so they are not as good as LLMs in general tasks.\n\nAs for hallucinations, since SLMs are built on smaller, more focused datasets, they\u2019re well suited for use in applications by industry. 
As such, \u201ctraining on a dataset that\u2019s built for a specific industry, field or company helps SLMs develop a deep and nuanced understanding that can lower the risk of erroneous outputs,\u201d according to Intel.\nRead more: How AI Is Different From Web3, Blockchain and Crypto\nMeta\u2019s Llama Leads by a Mile\nThe most popular SLMs in the last two years \u201cby far\u201d have been those in Meta\u2019s open-source Llama 2 and 3 families, according to the Amazon research paper.\nLlama 3 comes in 8 billion, 70 billion and 405 billion parameter models while Llama 2 has 7 billion, 13 billion, 34 billion and 70 billion versions. The SLMs would be the 8 billion model from Llama 3 and the 7 and 13 billion model from Llama 2. (Meta just released Llama 4 this week.)\nNew entrant DeepSeek R1-1.5B offers 1.5 billion parameters as the first reasoning model from the Chinese AI startup.\nOther SLMs include Google\u2019s Gemini Nano (1.8 billion and 3.25 billion parameter versions) and its Gemma family of open-source models. Last month, Google unveiled Gemma 3, which comes in 1, 4, 12 billion and 27 billion parameters.\nLast October, French AI startup and OpenAI rival Mistral unveiled a new family of SLMs: Ministraux, at 3 and 8 billion parameters. Its first SLM is Mistral 7B, which has 7 billion parameters.\nAnother notable SLM is Phi-2 from Microsoft. Despite only being 2.7 billion parameters, Phi-2 performs well in math, code, and reasoning tasks. It was trained using a carefully curated dataset, proving that smarter data selection can make even very small models capable.\nCode repository Hugging Face has hundreds of open-source SLMs available for companies to use.\n\nFor all PYMNTS AI coverage, subscribe to the daily\u00a0AI\u00a0Newsletter.\n\nThe post AI Explained: What\u2019s a Small Language Model and How Can Business Use It? 
appeared first on PYMNTS.com.", "date_published": "2025-04-07T16:19:12-04:00", "date_modified": "2025-04-07T16:19:12-04:00", "authors": [ { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/f05cc0fdcc9e387e4f3570c17158c503?s=512&d=blank&r=g" } ], "author": { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/f05cc0fdcc9e387e4f3570c17158c503?s=512&d=blank&r=g" }, "image": "https://www.pymnts.com/wp-content/uploads/2025/04/AI-small-language-model.png", "tags": [ "AI", "artificial intelligence", "chatbots", "ChatGPT", "DeepSeek", "Google", "large language models", "LLMs", "Meta", "Microsoft", "Mistral", "News", "OpenAI", "PYMNTS News", "SLMs", "small language models", "Technology", "artificial intelligence" ] } ] }