artificial intelligence Archives | PYMNTS.com

Sam Altman: OpenAI Has Reached Roughly 800 Million Users | Mon, 14 Apr 2025
https://www.pymnts.com/artificial-intelligence-2/2025/sam-altman-openai-has-reached-roughly-800-million-users/

OpenAI’s CEO says the generative artificial intelligence (AI) startup has reached approximately 800 million people.

“Something like 10% of the world uses our systems, now a lot,” said Sam Altman, whose comments at a Friday (April 11) TED 2025 event were reported by Seeking Alpha. 

Host Chris Anderson pointed out that Altman had said his company’s user base was growing rapidly, doubling in “just a few weeks.”

The report noted that OpenAI’s growth has been helped along by viral features like the ability to generate images and videos in a range of styles, such as that of the legendary Japanese animation studio Studio Ghibli.

Last month, Altman said the company, the maker of ChatGPT, had added a million users in one hour. Asked during the TED event whether the company had considered compensating artists when works are created in their styles, Altman said certain prompts could trigger payments to specific artists.

“I think it would be cool to figure out a new model where if you say, ‘I want to do it in the name of this artist,’ and they opt in, there’s a revenue model there,” Altman said.

Altman added that the company has guidelines to prevent its AI models from generating images in the styles of specific artists or creators. He also discussed the company’s work on AI agents, models that can operate autonomously on behalf of users.

In other AI news, PYMNTS wrote last week about ways the technology can help companies hoping to alleviate the cost of new tariffs. While those levies will eat into the bottom line of many businesses, AI can help reduce costs while ensuring productivity stays up.

Research by PYMNTS Intelligence has shown that 82% of workers who use generative AI at least weekly say it increases productivity, even though half of these workers also worry that AI would replace them at their jobs.

“AI can also facilitate material selection by assessing availability, compliance and cost implications, which helps brands find substitute materials when needed without compromising on quality or compliance with regulatory standards,” said Tarun Chandrasekhar, president and CPO at Syndigo.

Still, Pierre Laprée, chief product officer of SpendHQ, told PYMNTS that while AI has a part to play, it’s “misguided” to believe that AI will automatically offset rising costs from shifts in trade policy.

“Tariffs are complex, and so is procurement,” he said. “You need more than an algorithm — you need clean, structured, specific data. Without that, AI won’t reduce risk. It will amplify it.”

 

OpenAI Co-Founder’s Firm Value Jumps Sixfold After $6 Billion Funding Round | Sun, 13 Apr 2025
https://www.pymnts.com/artificial-intelligence-2/2025/safe-superintelligences-value-jumps-sixfold-after-6-billion-funding-round/

Artificial intelligence (AI) startup Safe Superintelligence (SSI) has reportedly raised $6 billion in new funding.

The round, as reported Friday (April 11) by the Financial Times (FT), values the company at $32 billion, a more than sixfold increase from the last time the firm raised money.

It’s the latest sign of continued investor enthusiasm for AI companies, even if — as the FT report noted — the company in question does not yet have a product.

The company has not provided much information about how it plans to overtake the likes of OpenAI and Anthropic, but co-founder Ilya Sutskever told the FT last year that he and his team had “identified a new mountain to climb that’s a bit different from what I was working on previously.”

Sources told the news outlet that SSI has been tight-lipped even with its backers, though three sources close to the company said SSI was working on “unique ways” of building and scaling AI models.

A separate report from Reuters — also citing unnamed sources — said that Google and Nvidia were among the investors in this round. PYMNTS has contacted SSI for comment but has not yet gotten a reply.

SSI was launched last year by Sutskever, formerly chief scientist at OpenAI, along with Apple AI veteran Daniel Gross and AI researcher Daniel Levy.

In the fall of 2023, Sutskever was involved in an unsuccessful attempt to oust OpenAI CEO Sam Altman from his position. Sutskever announced he would leave the company in May 2024, apparently on good terms with Altman.

Still, SSI has taken a different approach than OpenAI, PYMNTS wrote last year, focusing solely on developing safe superintelligence without pressure from commercial interests.

“This has reignited the debate over the possibility of achieving such a feat, with some experts questioning the feasibility of creating a superintelligent AI, given the current limitations of AI systems and the challenges in ensuring its safety,” that report said.

“Critics of the superintelligence goal point to the current limitations of AI systems, which, despite their impressive capabilities, still struggle with tasks that require common sense reasoning and contextual understanding.”

These critics contend that the jump from narrow AI, which excels at specific tasks, to a general intelligence that exceeds human capabilities requires more than just increasing computational power or data.

Skeptics also argue that the challenges involved in creating a safe superintelligence could prove insurmountable, given the limits of humanity’s current understanding of AI and the technology’s shortcomings.

Salesforce Bets Big on Enterprise AI to Drive Growth | Thu, 10 Apr 2025
https://www.pymnts.com/artificial-intelligence-2/2025/salesforce-svp-enterprise-ai-driving-growth-in-data-cloud-business/

Customer relationship management (CRM) giant Salesforce is seeing “massive” growth in its data cloud platform, fueled by interest in generative and agentic artificial intelligence (AI) from enterprises, according to one of the company’s senior executives.

In an exclusive interview with PYMNTS, Gabrielle Tao, senior vice president of product management at Salesforce, said the company is seeing robust demand for its data cloud services as enterprises realize they have to better organize their data to fully tap the power of AI.

“We make enterprise data ready for the agentic era,” Tao said, adding that the platform has seen strong growth in recent years. “We’re very excited to keep going and make agentic experiences world class.”

In the fiscal year ended Jan. 31, Salesforce Data Cloud booked $900 million in revenue, up 120% year over year. The company said the platform is used by nearly half of Fortune 100 companies. In Q4 alone, all of its top 10 deals included both AI and data cloud components.

Why all this interest in data alongside AI?

AI is fueled by data. Without data, AI is useless. AI is trained on data; it analyzes, makes predictions and learns new capabilities using data. It ingests text, images, videos, audio, code, sensor readings, math and other types of data, whether they reside in PDFs, emails, social media posts, spreadsheets, CRM systems, HR databases or other repositories.

But in most companies, data is not well organized. It is scattered across many systems, with batches siloed from one another. Before data is ready for AI, it must be cleaned, unified, well-structured, governed and — ideally — available in real time and searchable. Only then can AI be accurate, relevant and safe.
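
To make the requirement concrete, here is a minimal sketch, in Python with pandas, of what “cleaning and unifying” siloed records can look like in practice. The column names and matching rules are hypothetical; a real pipeline would follow a company’s own schema and governance policies.

    # A minimal, illustrative sketch of "AI-ready" data preparation using pandas.
    # Column names and matching rules are hypothetical examples.
    import pandas as pd

    def prepare_customer_records(frames: list[pd.DataFrame]) -> pd.DataFrame:
        """Combine siloed customer tables into one cleaned, deduplicated view."""
        combined = pd.concat(frames, ignore_index=True)

        # Standardize the fields used to match records across systems.
        combined["email"] = combined["email"].str.strip().str.lower()
        combined["name"] = combined["name"].str.strip().str.title()

        # Drop rows missing the key identifier, then collapse duplicates.
        combined = combined.dropna(subset=["email"])
        unified = combined.drop_duplicates(subset="email", keep="last")

        return unified.reset_index(drop=True)

    crm = pd.DataFrame({"email": ["A@x.com ", "b@x.com"], "name": ["ann lee", "Bo Chan"]})
    support = pd.DataFrame({"email": ["a@x.com", None], "name": ["Ann Lee", "Unknown"]})
    print(prepare_customer_records([crm, support]))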

“Whenever companies say they have ‘unified’ data, a lot of times what that means is they’ve centralized the storage,” Tao said. The reality is that in many cases, “they’re not able to unlock the value of the data in real time for business applications and business agents.”

According to a 2024 Harvard Business Review article, senior managers often complain “they don’t have data they really need and don’t trust what they have. … The hype around AI exacerbates those concerns.”

Salesforce has been tackling this unglamorous but critical problem. Tao joined the company in 2019 to address it by creating Data Cloud.

According to Gartner Peer Insights, the platform averages four out of five stars across 119 customer reviews. However, in 2024, one two-star review cited cost as a concern, and a handful of three-star reviews cited a lack of good documentation, limited features and integration issues.

Salesforce’s Data Cloud competitors include Adobe Real-Time CDP and Oracle Unity Customer Data Platform, among others.

Read more: Salesforce to Launch AI Agents and Cloud-Based POS for Retailers

Down and Dirty With Data

To be sure, problems with organizational data have been around for decades. Many vendors, including Snowflake, Databricks and the cloud computing giants, offer to organize an enterprise’s data by unifying it into data lakes or data warehouses.

Since the problem is not new, many companies are actually in some phase of data organization, Tao said. But unifying data remains a big lift for many.

“Probably 97% of the customers I talk to today, they have some form of data lake, data warehouse. Universally, that is pretty much true,” Tao said. “At the same time, the data is quite trapped in those places.”

That’s because they were built for analysts and data scientists, she said. For business users, “there is very limited ability to tap into the vastness of all that data.”

“How does that translate to something that makes sense for … the customer-facing agent performing the sales function?” Tao asked. She said Salesforce set out to make it accessible to business users.

Originally conceived as a customer data platform (CDP) in 2019 focused on marketing use cases, Salesforce Data Cloud has transformed into what Tao calls a “universal data activation layer” for all Salesforce applications, spanning sales, service, commerce, analytics and AI.

Unlike traditional data platforms that duplicate or copy data through complex extract-transform-load (ETL) processes, Data Cloud uses a “zero copy” architecture — something the company pioneered, according to Tao.

Zero copy pulls in metadata from disparate data sources without physically moving the data, allowing companies to harmonize, govern and activate their data in place. That means enterprises don’t need to unravel what they’ve built, Tao explained.

Tao said the result is a real-time, unified view of the customer that both human and AI agent workers can access without having to replicate or recode business logic across hundreds of systems. Since the AI uses the company’s own data, it mitigates hallucinations as well, Tao said.
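
As an illustration of the zero-copy idea, the sketch below registers only metadata about where fields live and assembles a customer view by reading each source in place. The class and method names are invented for this example and are not Salesforce’s actual API.

    # A conceptual sketch of "zero copy": store metadata about where fields live
    # and query the source systems in place, rather than ETL-copying rows into a
    # new store. Names here are illustrative only.
    from dataclasses import dataclass
    from typing import Any, Callable

    @dataclass
    class FieldMapping:
        logical_name: str             # name exposed to applications and agents
        source_system: str            # e.g., "warehouse", "crm"
        fetch: Callable[[str], Any]   # pulls the live value for a customer ID

    class ZeroCopyLayer:
        def __init__(self) -> None:
            self.mappings: dict[str, FieldMapping] = {}

        def register(self, mapping: FieldMapping) -> None:
            # Only metadata is stored here; the data itself stays at the source.
            self.mappings[mapping.logical_name] = mapping

        def customer_view(self, customer_id: str) -> dict[str, Any]:
            # Assemble a unified view on demand by reading each source in place.
            return {name: m.fetch(customer_id) for name, m in self.mappings.items()}

    layer = ZeroCopyLayer()
    layer.register(FieldMapping("lifetime_value", "warehouse", lambda cid: 1280.50))
    layer.register(FieldMapping("open_cases", "crm", lambda cid: 2))
    print(layer.customer_view("cust-001"))  # {'lifetime_value': 1280.5, 'open_cases': 2}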

Salesforce also offers a governance layer that lets companies do things like set access permissions for both human and AI agent workers. For example, companies can specify what data an entry-level sales representative can see and which objects a chatbot can access.

This ability will be key when companies deploy multiple layers of AI agents, she said.
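
A rough sketch of that kind of governance rule appears below: a policy maps each human role or AI agent to the objects it may read. The policy format is invented for illustration and is not Salesforce’s syntax.

    # A hedged sketch of an access-governance rule for human roles and AI agents.
    # The principals, objects and rule format are illustrative placeholders.
    ACCESS_POLICY = {
        "sales_rep_entry_level": {"Account", "Contact"},
        "support_chatbot_agent": {"Case", "KnowledgeArticle"},
    }

    def can_read(principal: str, obj: str) -> bool:
        """Return True if the human role or AI agent may read the object."""
        return obj in ACCESS_POLICY.get(principal, set())

    assert can_read("support_chatbot_agent", "Case")
    assert not can_read("support_chatbot_agent", "Account")  # chatbot cannot see accounts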

Anthropic Debuts $200-per-Month Subscription to Claude AI Model | Wed, 09 Apr 2025
https://www.pymnts.com/artificial-intelligence-2/2025/anthropic-debuts-200-per-month-subscription-to-claude-ai-model/

Anthropic has debuted a higher-priced subscription version of its Claude artificial intelligence (AI) chatbot.

The startup’s Max plan, announced Wednesday (April 9), is designed for users who work extensively with Claude and require expanded access for their most important tasks. This plan also gives users priority access to Anthropic’s newest features and models.

“The top request from our most active users has been expanded Claude access,” the announcement said. “The new Max plan delivers exactly that. With up to 20x higher usage limits, you can maintain momentum on your most demanding projects with little disruption.”

According to Anthropic, the Max plan lets users choose from two tiers: “Expanded Usage,” priced at $100 per month and designed for frequent users who work with Claude on a variety of tasks; and “Maximum Flexibility,” costing $200 per month, for daily users who collaborate with Claude on most tasks.

“More usage unlocks more possibilities to collaborate with Claude, whether for work or life,” the company said.

“At work, it means you can build Projects around any work task—whether it be writing, software, or data analysis—and really collaborate with Claude until your outcomes are just right.”

Anthropic rival OpenAI debuted a $200-per-month “research grade” version of its ChatGPT tool in December.

Last week, Anthropic rolled out Claude for Education, which lets universities come up with and implement AI-powered approaches to teaching, learning and administration.

The company said recently that it was more interested in developing generalist foundation AI models for enterprise users than for building hardware or consumer entertainment offerings.

Speaking at the HumanX conference in March, Anthropic Chief Product Officer Mike Krieger said the company was focused on balancing research breakthroughs and product development.

“We want to help people get work done, whether it’s code, whether it’s knowledge work, etc.,” he said. “And then you can then imagine different manifestations of that” in applications for the consumer, small business and up to large corporations and the C-suite.

Also Wednesday, PYMNTS spoke with AI experts about Meta’s newest Llama AI model, which features a context window of up to 10 million tokens — or around 7.5 million words — roughly 10 times that of Google’s Gemini 2.5.

Ilia Badeev, head of data science at Trevolution Group, told PYMNTS that his team was still marveling at Gemini 2.5’s 1 million-token context window when Llama 4 launched.

“This is an enormous number. With 17 billion active parameters, we get a ‘mini’ level model (super-fast and super-cheap) but with an astonishingly large context. And as we know, context is king,” Badeev said. “With enough context, Llama 4 Scout’s performance on specific applied tasks could be significantly better than many state-of-the-art models.”

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.

Meta’s Llama 4 Models Are Bad for Rivals but Good for Enterprises, Experts Say | Wed, 09 Apr 2025
https://www.pymnts.com/artificial-intelligence-2/2025/metas-llama-4-models-are-bad-for-rivals-but-good-for-enterprises-experts-say/

Meta’s latest open-source AI models are a shot across the bow to the more expensive closed models from OpenAI, Google, Anthropic and others.

But it’s good news for businesses because they could potentially lower the cost of deploying artificial intelligence (AI), according to experts.

The social media giant has released two models from its Llama 4 family: Llama 4 Scout and Llama 4 Maverick. They are Meta’s first natively multimodal models, meaning they were built from the ground up to handle text and images rather than having those capabilities bolted on.

Llama 4 Scout’s unique proposition: It has a context window of up to 10 million tokens, which translates to around 7.5 million words. The previous record holder is Google’s Gemini 2.5, at 1 million tokens and set to expand to 2 million.

The bigger the context window — the area where users enter the prompt — the more data and documents one can upload to the AI chatbot.
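
For a sense of the arithmetic, the sketch below uses the rough rule of thumb implied above (10 million tokens is about 7.5 million words, or roughly 0.75 words per token) to check whether a document fits a given window. Real tokenizers vary by language and content, so this is an estimate, not an exact count.

    # Rough context-window arithmetic. The 0.75 words-per-token ratio is the
    # rule of thumb implied by the article; actual tokenizers vary.
    WORDS_PER_TOKEN = 0.75

    def fits_in_context(word_count: int, context_tokens: int) -> bool:
        estimated_tokens = word_count / WORDS_PER_TOKEN
        return estimated_tokens <= context_tokens

    # A 7.5 million-word corpus fits in a 10 million-token window...
    print(fits_in_context(7_500_000, 10_000_000))   # True
    # ...but not in a 1 million-token window.
    print(fits_in_context(7_500_000, 1_000_000))    # False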

Ilia Badeev, head of data science at Trevolution Group, told PYMNTS that his team was still marveling at Gemini 2.5’s 1 million-token context window when Llama 4 Scout came along with 10 million.

“This is an enormous number. With 17 billion active parameters, we get a ‘mini’ level model (super-fast and super-cheap) but with an astonishingly large context. And as we know, context is king,” Badeev said. “With enough context, Llama 4 Scout’s performance on specific applied tasks could be significantly better than many state-of-the-art models.”

Read more: Meta Adds ‘Multimodal’ Models to Its Llama AI Stable

Only 1 Nvidia H100 Host Needed

Both Llama 4 Scout and Maverick have 17 billion active parameters, meaning the number of settings engaged at any one time. In total, however, Scout has 109 billion parameters and Maverick has 400 billion.

Meta also said Llama 4 Maverick is cheaper to run: between 19 and 49 cents per million tokens for input (query) and output (response); it runs on one Nvidia H100 DGX server.

The pricing compares with $4.38 for OpenAI’s GPT-4o. Gemini 2.0 Flash costs 17 cents per million tokens while DeepSeek v3.1 costs 48 cents. (While Meta is not in the business of selling AI services, it still seeks to minimize AI costs for itself.)
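
The gap becomes clearer with a quick back-of-the-envelope calculation using the per-million-token figures cited above. The sketch below assumes a single blended rate per model for simplicity; the prices are as reported in the article and may change.

    # Back-of-the-envelope cost comparison using the per-million-token prices
    # cited in the article, simplified to one blended input/output rate.
    PRICE_PER_MILLION_TOKENS = {
        "Llama 4 Maverick (low)": 0.19,
        "Llama 4 Maverick (high)": 0.49,
        "GPT-4o": 4.38,
        "Gemini 2.0 Flash": 0.17,
        "DeepSeek v3.1": 0.48,
    }

    def monthly_cost(tokens_per_month: int, price_per_million: float) -> float:
        return tokens_per_month / 1_000_000 * price_per_million

    # Example: a workload of 500 million tokens per month.
    for model, price in PRICE_PER_MILLION_TOKENS.items():
        print(f"{model}: ${monthly_cost(500_000_000, price):,.2f}")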

“One of the biggest blockers to deploying AI has been cost,” Chintan Mota, director of enterprise technology at Wipro, told PYMNTS. “The infrastructure, the inference, the lock-in — it all adds up.”

However, open-source models like Llama 4, DeepSeek and others are enabling companies to build a model fine-tuned to their businesses, trained on their own data and running in their environment, Mota said. “You’re not stuck waiting for a Gemini or (OpenAI’s) GPT feature release. You have more control over your own data and its security.”

Meta’s open-source Llama family will “put pressure on closed models like Gemini. Not because Llama is better, but because it’s good enough,” Mota added. “For 80% of business use cases — automating reports, building internal copilots, summarizing knowledge bases — ‘good enough’ and affordable beats ‘perfect’ and pricey.”

Read more: Musk’s Grok 3 Takes Aim at Perplexity, OpenAI

Fewer Filters, Just Like Grok

Llama 4 Scout and Maverick have a mixture-of-experts (MoE) architecture, meaning they don’t activate all of their “expert” subnetworks for every task. Instead, they route each input to the most relevant experts, which speeds up responses and saves money.
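
The toy example below illustrates the routing idea: a router scores the available experts for each input and only the top-scoring ones run, which is how MoE models keep per-query compute (the active parameters) low. The expert names and random scores stand in for what, in a real model, is a learned routing layer.

    # A toy illustration of mixture-of-experts routing, not a real model.
    import random

    EXPERTS = {
        "code": lambda text: f"[code expert] handled: {text}",
        "math": lambda text: f"[math expert] handled: {text}",
        "writing": lambda text: f"[writing expert] handled: {text}",
        "translation": lambda text: f"[translation expert] handled: {text}",
    }

    def route(text: str, top_k: int = 1) -> list[str]:
        # A real router is a learned layer; random scores stand in for it here.
        scores = {name: random.random() for name in EXPERTS}
        chosen = sorted(scores, key=scores.get, reverse=True)[:top_k]
        # Only the chosen experts are activated for this input.
        return [EXPERTS[name](text) for name in chosen]

    print(route("Refactor this SQL query", top_k=1))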

They were pre-trained on 200 languages, half with over 1 billion tokens each. Meta said this is 10 times more multilingual tokens than Llama 3.

Meta said Scout and Maverick were taught by Llama 4 Behemoth, a 2-trillion-parameter model that is still in training and available in preview.

“The three Llama 4 models are geared toward reasoning, coding, and step-by-step problem-solving. However, they do not appear to exhibit the deeper chain-of-thought behavior seen in specialized reasoning models like OpenAI’s ‘o’ series or DeepSeek R1,” Rogers Jeffrey Leo John, co-founder and CTO of DataChat, told PYMNTS.

“Still, despite not being the absolute best model available, LLama 4 outperforms several leading closed-source alternatives on various benchmarks,” John added.

Finally, Meta said it made Llama 4 less prone to punting questions it deems too sensitive — to be more “comparable to Grok,” the AI model from Elon Musk’s AI startup, xAI. The latest version, Grok 3, is designed to “relentlessly seek the truth,” according to xAI.

According to Meta, “our goal is to remove bias from our AI models and to make sure that Llama can understand and articulate both sides of a contentious issue.” It is less censorious than Llama 3.

For example, Llama 4 refuses to answer queries related to debated political and social topics less than 2% of the time, compared with 7% for Llama 3.3. Meta claims that Llama 4 is more “balanced” in choosing which prompts not to answer and is getting better at staying politically neutral.

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.

5 Ways AI Can Help Mitigate the Impact of Tariffs on Business | Tue, 08 Apr 2025
https://www.pymnts.com/artificial-intelligence-2/2025/5-ways-ai-can-help-mitigate-the-impact-of-tariffs-on-business/

The Trump tariffs are continuing to roil the business world, plunging the U.S. stock market dangerously close to bear territory. Because trade policy is an external force, companies have limited leeway in how to protect themselves.

“Tariffs, like any crisis, are extremely dynamic — and the latest round that imposed tariffs on all U.S. importers is a perfect example,” Leagh Turner, CEO of Coupa Software, told PYMNTS.

“They impact businesses in different ways depending on their country, product type and trade relationships. That makes it difficult for leaders to predict the full impact to their business.”

But artificial intelligence (AI) can help, despite the daily turbulence. A Zilliant survey found that 83% of U.S. C-suite leaders are using AI to adapt their pricing strategies to economic volatility, according to Stephan Liozu, the firm’s chief value officer.

Before diving into AI solutions, companies must first assess the following, according to Praful Saklani, CEO of Pramata.

  • Assess areas of the business potentially exposed to higher costs.
  • Determine the scale of the exposure by contract and relationship.
  • Understand the tools and strategies at one’s disposal.

Here are ways companies can use AI to navigate tariffs or cut costs, according to executives.

1. Use AI to monitor and understand shifting tariff policies in real time, allowing businesses to pivot more quickly.

“AI-powered trade policy monitoring scans government announcements and regulatory updates to forecast potential tariff shifts,” Tarun Chandrasekhar, president and CPO at Syndigo, told PYMNTS.

He added that historical analysis of past trade policies and macroeconomic trends identifies patterns that can give brands insight into how possible future tariff increases or decreases could impact them, such as how tariffs on specific materials affected sales of certain clothing items.

2. Use AI to find new sources for raw materials and other supplies.

“AI can also facilitate material selection by assessing availability, compliance, and cost implications, which helps brands find substitute materials when needed without compromising on quality or compliance with regulatory standards,” Chandrasekhar said.

Various AI models optimize for price, quality and delivery time while minimizing disruption to a company’s own customers, Vaclav Vincalek, CTO at Hiswai, told PYMNTS.

Chandrasekhar said AI can make tariff classification and compliance “significantly” easier as well, to avoid penalties and overpayment issues. Automated classification systems can scan product attributes to assign correct harmonized system codes, minimizing the risk of misclassification.
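
As a simplified illustration of that kind of automated classification, the sketch below maps product attributes to a harmonized system (HS) chapter with keyword rules. The rules and codes are placeholders for illustration, not a compliance tool; production systems would use trained classifiers and verified tariff schedules.

    # Illustrative tariff-classification sketch: match product attributes to an
    # HS chapter by keyword overlap. Rules and codes are simplified placeholders.
    HS_RULES = [
        ({"cotton", "t-shirt", "knit"}, "6109", "Knitted T-shirts"),
        ({"leather", "handbag"}, "4202", "Leather handbags and cases"),
        ({"laptop", "computer"}, "8471", "Automatic data processing machines"),
    ]

    def classify(product_attributes: set[str]) -> tuple[str, str]:
        best = max(HS_RULES, key=lambda rule: len(rule[0] & product_attributes))
        keywords, code, label = best
        if not keywords & product_attributes:
            return "UNKNOWN", "Route to a human classifier"
        return code, label

    print(classify({"cotton", "t-shirt", "short-sleeve"}))  # ('6109', 'Knitted T-shirts')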

Read more: 82% of US Workforce Believes GenAI Boosts Productivity

3. Improve supplier resiliency and scenario planning with AI.

“Tap into buyer-supplier networks to run different scenarios to find near-shore or offshore suppliers, negotiate terms and reroute supply chains — rapidly. Essentially, enabling the quick pivot,” Turner said.

She also recommended optimizing operations to improve on-hand inventory and cash. The availability of planning and forecasting tools means companies can compare supplier pricing, data and risks to optimize inventory, giving them a cushion, Turner said.

It’s also important to assume total control over the spending lifecycle. Having insights and a holistic view of spending and suppliers sets up companies to make agile changes in the supply chain, she added.

4. AI can help increase efficiency, reduce costs and raise worker productivity.

Tariffs will cut into the bottom line of many companies, but AI can help keep costs down while ensuring productivity stays up.

According to a January 2025 PYMNTS Intelligence report, “GenAI: A Generational Look at AI Usage and Attitudes,” 82% of workers who use generative AI at least weekly say it increases productivity. But half of these workers also worry that AI would take their jobs.

As for cost reduction, Shopify CEO Tobi Lutke is looking to save money by using AI instead of hiring more workers.

In a memo to employees he posted on X, Lutke wrote that “before asking for more headcount and resources, teams must demonstrate why they cannot get what they want done using AI.”

5. Be realistic about how much AI can help and use other strategies as well.

Pierre Laprée, chief product officer of SpendHQ, told PYMNTS that while AI has a role to play, it’s “misguided” to believe that AI will automatically offset rising costs from trade policy shifts.

“Tariffs are complex, and so is procurement. You need more than an algorithm — you need clean, structured, specific data. Without that, AI won’t reduce risk. It will amplify it,” Laprée said.

Paul Magel, president of the supply chain tech division at CGS, agreed. He told PYMNTS that the data feeding into the AI systems must be clean and accurate for it to work optimally. “AI is not a panacea,” Magel said. “It’s incredibly helpful but requires the right approach to be effective.”

AI Startups: SignalFire Raises $1 Billion to Invest in AI Startups | Tue, 08 Apr 2025
https://www.pymnts.com/artificial-intelligence-2/2025/ai-startups-signalfire-raises-1-billion-to-invest-in-ai-startups/

Venture capital firm SignalFire has raised more than $1 billion in new capital to invest in artificial intelligence (AI) startups.

It now has a total of $3 billion under management, according to a Monday (April 7) news release.

The funding will go toward seed to Series B investments in applied AI. The capital will be deployed through the firm’s Seed, Early, Executive-in-Resident (XIR) and Opportunities funds, per the release.

SignalFire uses AI and data to find and develop high-growth startups. Through its Beacon AI technology, the firm analyzes data from 650 million professionals and 80 million organizations to guide investment and operational decisions.

Beacon AI spots market and talent trends to help SignalFire investors and portfolio companies build their teams and products, the company said.

Unlike traditional VCs adapting to AI, SignalFire was built from the ground up with AI in its DNA. As such, the firm said it could spot breakthrough startups “earlier” and accelerate company growth.

“AI’s next frontier isn’t invention, it’s implementation,” SignalFire partner Wayne Hu said in the release. “With these funds, we’ll continue to back founders who transform theoretical AI technology into market-changing solutions.”

Read more: VC Investors Shrink as Money Goes to Big Tech Startups

AI Infrastructure Startup Nexthop AI Raises $110 Million

Nexthop AI, a startup developing advanced networking solutions for cloud clusters, has emerged from stealth with $110 million in funding.

The round was led by Lightspeed Venture Partners, with backing from Kleiner Perkins, WestBridge, Battery Ventures and Emergent Ventures, according to a press release. The funds will accelerate product development tailored to meet the growing demands of AI training and inference.

Hyperscalers — cloud computing giants — are investing billions annually in their GPU and networking infrastructure. They also require highly optimized software and hardware infrastructure attuned to data center buildouts, the startup said.

“The world’s largest cloud providers need a new generation of networking capabilities to keep pace with the demands of AI workloads,” Guru Chahal, partner at Lightspeed Venture Partners, said in the announcement. “Nexthop AI is filling a critical gap in this $35 billion market with its deep domain expertise, pioneering technology and customized solutions.”

The company partners with cloud providers, acting as an extension of their engineering teams to deliver scalable, power-efficient artificial intelligence infrastructure.

Read also: Nvidia and xAI Sign On to $30 Billion AI Infrastructure Fund

OpenAI Opens Free AI Academy

OpenAI has launched OpenAI Academy, a free learning hub for all things AI. The educational website is open to everyone, regardless of background.

The site provides videos, tutorials and other content. People can meet virtually or in person to learn, network and collaborate.

Topics include “ChatGPT for Data Analysis,” “Advanced Prompt Engineering,” and “Collaborating with AI: Group Work and Projects Simplified.”

There are also tutorials for AI in education, use of AI in personal life and how to use the company’s video generator Sora, as well as developer courses.

OpenAI is not offering certification or accreditation at this time. All lessons are in English, with more languages to come.

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.

63% Adoption Rate Shows GenAI Enthusiasm Among Younger Consumers | Tue, 08 Apr 2025
https://www.pymnts.com/artificial-intelligence-2/2025/63-percent-adoption-rate-shows-genai-enthusiasm-among-younger-consumers/

Generative artificial intelligence (GenAI) is gaining traction among U.S. consumers across all age groups, according to a new PYMNTS Intelligence report, even as enthusiasm cools for voice assistants.

The study, titled “GenAI and Voice Assistants: Adoption and Trust Across Generations,” reveals changing consumer preferences concerning GenAI, indicating a shift toward newer technologies as trust in older voice assistant technology declines. This trend, ironically observed among the initial proponents of voice assistants, raises questions about the enduring appeal of technologies in a rapidly innovating landscape.

The PYMNTS special report, which surveyed 2,721 U.S. consumers between June 5 and June 21, 2024, examined consumer habits and opinions regarding both voice assistants and GenAI. The findings highlight a decrease in consumer confidence concerning the future capabilities of voice assistants, with the percentage believing they will become as smart and reliable as humans falling from 73% in March 2023 to 60% in June 2024. This erosion of trust is particularly pronounced among millennials and bridge millennials.

Concurrently, GenAI has experienced significant adoption, with 34% of U.S. consumers having used it in the 90 days preceding the survey. The report suggests that the perceived lack of advancement and dependability in voice assistants is contributing to this change, particularly as consumers hold high expectations for technological progress.

 

Key data points from the report include:

  • Shifting Confidence in AI Technologies: Confidence in voice assistants becoming as smart and reliable as humans decreased from 73% in March 2023 to 60% in June 2024. Trust in voice assistants to handle critical situations has also declined, with only 43% of U.S. consumers trusting them to call for help after an auto accident, down from 50% the previous year.
  • Generational Adoption Patterns: 63% of Gen Z consumers reported using GenAI in the past 90 days, demonstrating the highest adoption rate. Familiarity with GenAI among baby boomers and seniors jumped from 23% to 41%.
  • Use Cases and Potential for Reintegration: 47% of consumers who used GenAI did so for quick information retrieval, indicating a key utility. Millennials and bridge millennials (both at 32%) were the most frequent users of voice-activated devices.

The PYMNTS report underscores the enduring perceived value of GenAI for tasks like information retrieval, maintaining a stable 63% utility rating among U.S. consumers.

The study highlights distinct generational patterns in technology adoption, with zillennials, as digital natives, showing a greater propensity to integrate new tools like GenAI into their daily lives.

However, the observed decline in trust surrounding voice assistants serves as a lesson about the necessity of consistent performance and reliability for the sustained adoption of any novel technology.

The report’s methodology involved a census-balanced survey of 2,721 U.S. consumers, with an oversampling of the zillennial generation to enable more in-depth analysis.

Shopify CEO Tobias Lütke: Employees Must Learn to Use AI Effectively | Mon, 07 Apr 2025
https://www.pymnts.com/artificial-intelligence-2/2025/shopify-ceo-tobias-lutke-employees-must-learn-to-use-ai-effectively/

Shopify now considers the use of artificial intelligence (AI) by employees to be a “baseline expectation,” CEO Tobias Lütke said in an internal memo that he posted on X after learning it had been leaked.

Using AI is critical at a time when merchants and entrepreneurs are leveraging the technology and when Shopify is tasked with making its software the best platform on which they can develop their businesses, Lütke said in the memo.

“We do this by keeping everyone cutting edge and bringing all the best tools to bear so our merchants can be more successful than they themselves used to imagine,” he said. “For that we need to be absolutely ahead.”

Lütke said in the post that he is using AI all the time and that he invited employees to tinker with the technology last summer, but that his statement at the time was “too much of a suggestion.”

Now, he said, he wants to change that perception because continuous improvement is expected of everyone at Shopify and AI can deliver necessary capabilities.

“Using AI effectively is now a fundamental expectation of everyone at Shopify,” Lütke said in the memo. “It’s a tool of all trades today, and will only grow in importance.”

Lütke said in the memo that Shopify will add questions about AI usage to its performance and peer review questionnaire, that employees are expected to share what they learn about AI with their colleagues, and that teams who want to ask for more headcount and resources must demonstrate why AI cannot do what they need done.

“What we need to succeed is our collective sum total skill and ambition at applying our craft, multiplied by AI, for the benefit of our merchants,” Lütke wrote in the memo.

Eighty-two percent of workers across several industries who use generative AI (GenAI) at least weekly agree that it can increase productivity, according to the PYMNTS Intelligence report, “Workers Say Fears About GenAI Taking Their Jobs Is Overblown.”

The report also found that 50% of those who use GenAI weekly worry that the technology could eventually eliminate their specific job, compared to 24% of those who are unfamiliar with it.

AI Explained: What’s a Small Language Model and How Can Business Use It? | Mon, 07 Apr 2025
https://www.pymnts.com/artificial-intelligence-2/2025/ai-explained-whats-a-small-language-model-and-how-can-business-use-it/

Artificial intelligence (AI) is now a household word, thanks to the popularity of large language models like ChatGPT. These large models are trained on the whole internet and often have hundreds of billions of parameters — settings inside the model that help it guess what word comes next in a sequence. The more parameters, the more sophisticated the model.

A small language model (SLM) is a scaled-down version of a large language model (LLM). It doesn’t have as many parameters, but users may not need the extra power, depending on the task at hand. As an analogy, people don’t need a supercomputer to do basic word processing. They just need a regular PC.

But while SLMs are smaller in size, they can still be powerful. In many cases, per IBM data, they are faster, cheaper and offer more control — key for companies looking to deploy powerful AI into their operations without breaking the bank.

Large language models can have as many as trillions of parameters, such as OpenAI’s GPT-4. In contrast, small language models typically have between a few million and a few billion parameters.
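
For intuition about where those counts come from, here is a rough sketch using simplified formulas for a transformer-style model. The configurations are hypothetical, and the formulas ignore biases, normalization layers and other details, so treat the outputs as order-of-magnitude estimates only.

    # Simplified parameter-count arithmetic for a transformer-style model.
    # Configurations are hypothetical; formulas omit biases and other details.
    def transformer_params(vocab: int, d_model: int, layers: int, ff_mult: int = 4) -> int:
        embeddings = vocab * d_model
        attention = 4 * d_model * d_model               # Q, K, V and output projections
        feed_forward = 2 * d_model * (ff_mult * d_model)  # up- and down-projections
        per_layer = attention + feed_forward
        return embeddings + layers * per_layer

    # A small model vs. a much larger one (hypothetical configurations):
    print(f"{transformer_params(vocab=32_000, d_model=1024, layers=16):,}")   # a few hundred million
    print(f"{transformer_params(vocab=128_000, d_model=8192, layers=80):,}")  # tens of billions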

According to a January 2025 paper by Amazon researchers, SLMs in the range of 1 billion to 8 billion parameters performed just as well as, or even outperformed, large models.

For example, SLMs can outperform LLMs in certain domains because they are trained on industry-specific data. LLMs, however, do better on general knowledge.

SLMs also require far less computing power. They can be deployed on PCs, mobile devices or in company servers instead of the cloud. This makes them faster, cheaper and easier to fine-tune for specific business needs.
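
As a minimal sketch of what local deployment can look like, the example below loads a small open model with the Hugging Face transformers library, assuming that library and a compatible backend such as PyTorch are installed. The model ID, Microsoft’s Phi-2, is one example of an SLM mentioned later in this article; any open model sized for the available hardware would work.

    # Minimal sketch: run a small language model locally instead of calling a cloud API.
    # Assumes the transformers library (and PyTorch) is installed.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="microsoft/phi-2",   # example SLM (~2.7B parameters); swap in any open model
        device_map="auto",         # needs the accelerate package; remove to run on CPU
    )

    prompt = "Summarize why on-device language models help with data privacy:"
    result = generator(prompt, max_new_tokens=60, do_sample=False)
    print(result[0]["generated_text"])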

See also: AI Explained: What Is a Large Language Model and Why Should Businesses Care?

Advantages and Disadvantages of SLMs

Small language models are quickly becoming popular among businesses that want the benefits of AI without the steep cost and complexity of LLMs.

The following are advantages of SLMs over LLMs:

  • Cost efficiency: Large language models are expensive to run, especially at scale. Small models, on the other hand, can operate on personal computers or devices like smartphones and IoT sensors. Using SLMs along with LLMs for more critical and complex tasks can keep AI costs down.
  • Data privacy and control: Using an LLM typically means sending data to the cloud, which raises privacy concerns. Small models can be deployed entirely on premises, meaning companies retain full control over their data and workflows. This is especially important in regulated industries like finance and healthcare.
  • Speed and responsiveness: Because they are lighter, small models deliver responses more quickly and can operate with less latency. This is particularly valuable in real-time settings such as customer service chatbots.

“Lower data and training requirements for SLMs can translate to fast turnaround times and expedited ROI,” according to Intel.

Disadvantages of SLMs:

  • Bias learned from LLMs: Since smaller models are truncated versions of large models, bias in the parent model can be passed on.
  • Lower performance on complex tasks: Since they’re not as robust as the large models, they might be less proficient in complicated tasks that require knowledge in a comprehensive range of topics.
  • Not great at general tasks: SLMs tend to be more specialized, so they are not as good as LLMs at general tasks.

As for hallucinations, since SLMs are built on smaller, more focused datasets, they’re well suited for industry-specific applications. As such, “training on a dataset that’s built for a specific industry, field or company helps SLMs develop a deep and nuanced understanding that can lower the risk of erroneous outputs,” according to Intel.

Read more: How AI Is Different From Web3, Blockchain and Crypto

Meta’s Llama Leads by a Mile

The most popular SLMs in the last two years “by far” have been those in Meta’s open-source Llama 2 and 3 families, according to the Amazon research paper.

Llama 3 comes in 8 billion-, 70 billion- and 405 billion-parameter versions, while Llama 2 has 7 billion, 13 billion, 34 billion and 70 billion versions. The SLMs would be the 8 billion model from Llama 3 and the 7 billion and 13 billion models from Llama 2. (Meta just released Llama 4 this week.)

New entrant DeepSeek R1-1.5B offers 1.5 billion parameters as the first reasoning model from the Chinese AI startup.

Other SLMs include Google’s Gemini Nano (1.8 billion- and 3.25 billion-parameter versions) and its Gemma family of open-source models. Last month, Google unveiled Gemma 3, which comes in 1 billion-, 4 billion-, 12 billion- and 27 billion-parameter versions.

Last October, French AI startup and OpenAI rival Mistral unveiled a new family of SLMs, Ministraux, at 3 billion and 8 billion parameters. Its first SLM was Mistral 7B, which has 7 billion parameters.

Another notable SLM is Phi-2 from Microsoft. Despite only being 2.7 billion parameters, Phi-2 performs well in math, code, and reasoning tasks. It was trained using a carefully curated dataset, proving that smarter data selection can make even very small models capable.

AI model hub Hugging Face hosts hundreds of open-source SLMs available for companies to use.

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.
