A Simple Guide to Everything You Need to Know About AI Art

You can’t go around any metaphorical corner these days without stumbling upon a piece of AI-generated art or two. Even big companies like Adobe and Getty have been forced to embrace AI to stay competitive. But for many, it’s all still a mystery.
How exactly does it all work?
Back in 2022, the first mainstream AI application to rock the world was the creation of images using AI and text prompts. From those humble beginnings, the market has now grown into a massive empire, from hobbyist tools running on home computers to enterprise-grade design products used everywhere from Hollywood to Paris fashion houses. And that growth shows no signs of slowing down anytime soon. According to Market.us, the generative AI art market is expected to be worth around $40 billion by 2033, up from $3.2 billion in 2023. That’s an astonishing growth rate by any standard.
But how exactly is AI able to create beautiful artwork or photorealistic photos simply by typing text into a chat box?
Behind the Scenes of Generative AI

First, let’s dispel a huge myth. Generative AI art models don’t store billions of photos or artworks from around the world in a giant database. Instead, these systems are trained on a global library of images, just as an art student studies the works of Van Gogh, Warhol, or Rembrandt in detail.
This training gives the model a basic understanding of how different kinds of images are put together: light, brushstrokes, perspective, and so on. The same applies to photos and other aspects of design and imagery, and it is the basis for generating images on demand through a process called “diffusion.” During training, an image such as a photo of a house is progressively corrupted with noise, and the model learns to reverse that corruption. What the model stores afterwards is not the photo itself but the patterns it has learned, encoded in its network weights.
When a user later launches the AI app and types in “draw me a house,” the system doesn’t look anything up in a database. Instead, it reconstructs a house from what it learned in training: starting with an image of pure random noise, it keeps removing noise, step by step, until a coherent image emerges.
This is not too dissimilar to the process by which a sculptor reveals the final sculpture by chiseling at a block of marble until their idea emerges. Much of the public confusion about AI “stealing art” stems from a misunderstanding of how this process works. An AI can reproduce any style it’s trained on, just as an art student can reproduce a Warhol piece by following its rules of color, composition, and perspective.
But just like the student, an AI image generator almost always draws on all of its training to create new, original images.
While it can create works that mimic existing art, the law is clear that you can’t copyright a style, only the exact work that the artist created.
These questions and more are now being worked through legal systems around the world as we all begin to realize that AI systems can replicate almost anything in an instant.
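The denoising loop at the heart of diffusion can be sketched in toy form. This is an illustrative stand-in, not a real diffusion model: here `denoise_step` simply pulls a noisy vector a little closer to a fixed target on each pass, standing in for the learned denoiser a real model applies at every step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "clean image" the model has learned to produce (a 1-D toy stand-in).
target = np.linspace(0.0, 1.0, 16)

# Start from pure random noise, as a diffusion sampler does.
x = rng.normal(size=16)

def denoise_step(x, target, strength=0.2):
    # Toy stand-in for the learned denoiser: each step removes a fraction
    # of the remaining noise, nudging the image toward the target.
    return x + strength * (target - x)

for _ in range(50):
    x = denoise_step(x, target)

print(float(np.abs(x - target).mean()))  # residual "noise" is now near zero
```

After enough steps the noise is essentially gone, which is the sculptor-and-marble analogy in code.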
What are the best AI art generators right now?

GPT-Image-1
The undisputed champion of image generation systems right now is OpenAI’s GPT-Image-1. Launched in late April of this year, this model combines two standout features. The first is excellent prompt adherence, meaning it outputs exactly what you describe in the text box.
The second, and perhaps more important, feature is impeccable text handling. Until now, reproducing text in AI images has been notoriously difficult and unpredictable. GPT-Image-1 has largely solved this problem, with an astounding 99% hit rate for accurately generating the desired text.
Midjourney
Midjourney is a veteran in the field of AI image generators, and one of the first to consistently deliver excellent image quality. Such consistency made it almost an industry standard for many art and design professionals.
However, over time it has lost some of its influence, although a recent upgrade to version 7 has firmly put it back on the list of all-time greats.

Freepik
Freepik is one of the relative newcomers in the field of AI image generation, and I personally love it. It is based on the powerful Flux AI image model and does an excellent job of meeting your image and text needs.
In a recent test I conducted for Tom’s Guide, it surprisingly beat a heavyweight competitor. The Mystic 2.5 flexible model in the service performed very well and produced some stunning images on demand.
Its benchmark image generation is supported by a range of powerful and useful tools, including best-in-class editing features and an excellent product mockup generator. The sketch function also provides an innovative way to refine your creative vision in real time.
Check out our constantly updated list of the top AI image generators for more great services.
Can I make AI art for free?

Not all AI image generators cost money. There are now a growing number of free and open source alternatives that you can use on your home computer without signing up for an online cloud service.
The price you pay is that they usually require a fairly powerful computer to run and require some technical knowledge to install and run. Here are a few of my favorite options.
Krita
This free and open source art tool was born more than 20 years ago as an alternative to commercial products such as Adobe Photoshop and Corel Painter, and it has earned a firm place in artists’ toolboxes ever since. Recently, it added AI capabilities through a sophisticated plugin that is increasingly being integrated into the main product.
Krita’s appeal lies in its rich editing capabilities and intuitive AI features. It can also use local AI models stored on your computer, which is a boon for people who like to create offline.
Pinokio
One of the best ways to experience a variety of free and open source image generation tools is to download and install Pinokio. This easy-to-use product brings together some of the world’s best open source AI products, including various image tools such as Stable Diffusion, Fooocus, Forge, and PhotoMaker2.
The best part? All the tools install in one click. As long as you have enough processing power and storage space, you can start using them right away.
Train your brain for creative work with Gen AI
There are countless articles on how to use generative artificial intelligence (gen AI) to improve work, automate repetitive tasks, summarize meetings and client interactions, and synthesize information. There are also vast virtual libraries filled with tips and guides that can help us achieve more effective or even superior output with gen AI tools. Many common digital tools already come with integrated AI co-pilots that automatically enhance and complete writing, coding, designing, creating, and whatever you’re working on. But generative AI does more than just enhance or accelerate what we already do. With the right mindset shift, we can train our brains to creatively rethink how to use these tools to unlock entirely new value and achieve exponential results in an AI-first world.
Generative AI relies on natural language processing (NLP) to understand requests and generate relevant results. It is basically pattern recognition and pattern assembly based on instructions to provide output that accomplishes the task at hand. This approach fits with our brain’s default mode: pattern recognition and the pursuit of efficiency, which favors short, direct prompts for immediate, predictable results.
If most people use AI in this way, no matter how powerful these tools are, we will inadvertently create a new status quo in the way we work and create. Training our brains to challenge our thinking, our assumptions about AI’s capabilities, and our expectations for predictable results starts with a mindset shift to recognize that AI is not just a tool, but a partner in innovation and exploration of unknown territory.
Rethinking Collaboration with AI for More Creative and Innovative Outcomes

Changing your mindset to collaborate with AI in a more creative and open way means being willing to explore unknown territory and having the ability to learn, unlearn, and experiment. Plus, it’s fun.
I often say that I maximize the potential of AI and achieve the best results when I put aside my cognitive biases. With a smile on my face, I ask myself, “WWAID?” or “What would AI do?” I acknowledge that the way I unconsciously use AI tools may default to predictable inputs and outputs. But by asking WWAID, I open myself up to new interactions and experiences that may yield unexpected results.
Tapping into AI’s creative and transformative potential, and training your brain for an AI-first world, requires us to shift our prompting approach to thinking of AI as a partner, not just a tool.
12 Exercises to Train Your Brain to Work More Creatively with AI
Here are a dozen ways to train our brains to achieve broader, more innovative outcomes with AI:
1. Practice “exploratory prompts” every day
Start each day with an open-ended prompt that pushes you to think boldly. Try asking yourself, “What trends or opportunities are there in my industry that I don’t see coming?” or “How can I completely redefine my approach to key challenges?”
2. Create prompts around “what if” and “how can we” questions

Instead of asking direct questions, ask about open-ended possibilities. For example, instead of asking “How can I be more efficient?”, try asking “If I could be more efficient in an unconventional way, what would that look like?”
3. Embrace ambiguity and curiosity in prompts

By training ourselves to prompt without a clear endpoint, AI can generate answers that may surprise us. Prompts like “What might I have overlooked in approaching X?” can open doors to insights we never considered.
4. Use prompts to explore rather than solve problems

Many prompts focus on solutions. Shifting to exploration can yield deeper insights. For example, “Let’s explore what the future of leadership would look like if AI had a seat at the board or C-suite — how would our jobs, roles, and corporate culture change?”
5. Chain prompts to develop ideas iteratively

Don’t stop at the first answer; ask follow-up questions that make the answer more complex and visionary. If the AI comes up with an idea, build on it with questions like “What will it look like in 5 years?” or “How could this approach change the way the company operates in the future?”
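Prompt chaining as described in exercise 5 can be sketched in a few lines. Everything here is illustrative: `ask` is a hypothetical placeholder that would, in a real version, call whatever chat-model API you use; the point is the shape of the loop, where each answer becomes context for the next question.

```python
def ask(prompt: str) -> str:
    # Placeholder response; a real implementation would call an LLM API here.
    return f"[model answer to: {prompt[:60]}...]"

idea = ask("Propose one unconventional way my team could work more efficiently.")
for follow_up in [
    "What will it look like in 5 years?",
    "How could this change the way the company operates?",
]:
    # Feed the previous answer back in as context for a deeper follow-up.
    idea = ask(f"Building on this idea: {idea}\n{follow_up}")

print(idea)
```

Each iteration makes the running answer richer, which is exactly the “don’t stop at the first answer” habit the exercise recommends.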
Meta AI searches made public – but do all its users realise?
How would you feel if your internet search history was put online for others to see?
That may be happening to some users of Meta AI without them realising, as people’s prompts to the artificial intelligence tool – and the results – are posted on a public feed.
One internet safety expert said it was “a huge user experience and security problem” as some posts are easily traceable to social media accounts.
This means some people may be unwittingly telling the world about their searches – such as asking the AI to generate scantily-clad characters or help them cheat on tests.
Meta says chats are private by default, and if users make a post public they can choose to withdraw it later.
Before a post is shared, a message pops up which says: “Prompts you post are public and visible to everyone… Avoid sharing personal or sensitive information.”
However – given the private nature of some of the queries – it is not clear if the users understand their searches are being posted into a public “Discover” feed on the Meta AI app and website, and that these could be traced to their other social accounts through usernames and profile pictures.
The BBC found several examples of people uploading photos of school or university test questions, and asking Meta AI for answers.
One of the chats is titled “Generative AI tackles math problems with ease”.
Another user’s conversation which was posted publicly was about them exploring questions around their gender and whether they should transition.
There were also searches for women and anthropomorphic animal characters wearing very little clothing.
One search, which could be traced back to a person’s Instagram account because of their username and profile picture, asked Meta AI to generate an image of an animated character lying outside wearing only underwear.
‘You’re in control’

Meta AI, launched earlier this year, can be accessed through Meta’s social media platforms Facebook, Instagram and WhatsApp.
It is also available as a standalone product which has a public “Discover” feed.
Users can opt to make their searches private in their account settings.
Meta AI is currently available in the UK through a browser, while in the US it can be used through an app.
In a press release from April which announced Meta AI, the company said there would be “a Discover feed, a place to share and explore how others are using AI”.
“You’re in control: nothing is shared to your feed unless you choose to post it,” it said.
But Rachel Tobac, chief executive of US cyber security company Social Proof Security, posted on X saying: “If a user’s expectations about how a tool functions don’t match reality, you’ve got yourself a huge user experience and security problem.”
She added that people do not expect their AI chatbot interactions to be made public on a feed normally associated with social media.
“Because of this, users are inadvertently posting sensitive info to a public feed with their identity linked,” she said.
Analysis of DJI drone battery technology: How AI reshapes the intelligent flight experience
The Intelligent Revolution of Drone Batteries
With the rapid development of drone technology, batteries have evolved from simple energy supply units to “intelligent power centers” integrated with AI algorithms. The flagship models represented by DJI Mavic 3 Pro use the DJI BP03 intelligent battery to achieve a comprehensive upgrade in battery life prediction, health management and safety protection through artificial intelligence technology. This article will deeply analyze how AI empowers drone batteries and explore future technology trends.
1. AI-driven battery management system (BMS)
1. Dynamic battery life prediction algorithm
Traditional batteries only display a rough percentage of remaining power, while the AI system of the Mavic 3 Pro analyzes in real time:

Flight attitude data (wind speed, climbing resistance)
Environmental parameters (temperature, altitude)
Load status (gimbal power consumption, image transmission strength)
Combined with historical data models, it can accurately predict the remaining flight time (error <3 minutes). For example, when it detects that the drone is flying into a headwind, the system will automatically lower the displayed battery life and recommend returning home.
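The idea behind this prediction can be illustrated with a toy estimate built from the factors listed above: remaining energy divided by predicted power draw, where headwind and payload raise the draw. The coefficients and numbers here are illustrative assumptions, not DJI’s actual model.

```python
def remaining_minutes(charge_wh, base_draw_w, headwind_ms, payload_w):
    # Headwind and payload both raise the predicted power draw
    # (4 W per m/s of headwind is an invented, illustrative penalty).
    draw_w = base_draw_w + 4.0 * headwind_ms + payload_w
    return 60.0 * charge_wh / draw_w

calm = remaining_minutes(charge_wh=77, base_draw_w=180, headwind_ms=0, payload_w=15)
windy = remaining_minutes(charge_wh=77, base_draw_w=180, headwind_ms=8, payload_w=15)
print(round(calm, 1), round(windy, 1))  # headwind lowers the displayed battery life
```

A real system would learn these penalties from flight telemetry rather than hard-code them, but the logic of “headwind detected, lower the estimate” is the same.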
2. Adaptive charging and discharging strategy
Intelligent charging sorting: Charging manager (Battery Hub 03) uses AI to prioritize charging the battery with the lowest power, increasing efficiency by 20%
Cycle life optimization: AI learns user habits and automatically keeps the battery at 50% charge (the best storage state for lithium batteries) during long periods of non-use
3. Fault prediction system
By monitoring:

Cell voltage difference (warning when >0.1V)
Internal resistance change trend
Abnormal charging rate

the system can flag battery degradation risk 3-5 cycles in advance, avoiding power-loss accidents during flight
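A minimal version of the first of those checks can be written directly from the threshold the article cites: warn when the spread between cell voltages exceeds 0.1 V. The voltages below are illustrative values, not measurements.

```python
def cell_spread_warning(cell_voltages, limit_v=0.1):
    # Flag the pack when the gap between the highest and lowest cell
    # voltage exceeds the warning threshold (0.1 V per the article).
    spread = max(cell_voltages) - min(cell_voltages)
    return spread > limit_v

print(cell_spread_warning([3.85, 3.84, 3.86, 3.85]))  # False: healthy pack
print(cell_spread_warning([3.85, 3.70, 3.86, 3.85]))  # True: one weak cell
```

Real battery management systems track this spread over time alongside internal resistance and charge-rate anomalies, rather than checking a single snapshot.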
2. Deep integration of AI and battery safety
1. 3D temperature field modeling
The battery has 5 built-in temperature sensors, and AI constructs a 3D thermal map:
Normal working conditions: uniform heat dissipation design
Extreme environment: automatic output power limiting (such as a 10% power reduction at 40°C)
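That throttling rule can be expressed as a simple sketch. The numbers come from the example in the text; the function itself is an illustration, not DJI firmware, and a real controller would derate gradually rather than in one step.

```python
def output_power_w(nominal_w, temp_c):
    # Derate output power by 10% once pack temperature reaches 40 °C,
    # matching the article's example; below that, run at full power.
    return nominal_w * 0.9 if temp_c >= 40 else nominal_w

print(output_power_w(500, 25))  # full power in normal conditions
print(output_power_w(500, 42))  # derated in extreme heat
```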
3. AI-enabled user interaction experience
1. Voice assistant linkage
Through the “DJI Assistant” app, you can:
“Check battery health” → Display cycle number/maximum capacity
“Plan multi-battery charging” → Automatically calculate the optimal charging sequence
3. Smart maintenance reminder
Automatically push based on frequency of use:
“You have a battery that has been idle for more than 30 days; it is recommended to charge it to 60%”
“The current cycle count has reached 150; it is recommended to enable the backup battery”

AI fast charging optimization
By using a neural network to learn the battery’s chemical characteristics, 0-80% charging could be completed in 15 minutes (it currently takes about 40 minutes).
Wireless charging collaboration
Drones and charging docks automatically align their coils (error <1mm) through AI negotiation and dynamically match the transmission power (up to 500W)
Hydrogen-electric hybrid system
AI manages the coordinated energy supply of fuel cells and lithium batteries:
Cruise phase: hydrogen fuel cells carry the main load
Maneuvering phase: lithium batteries instantly supply peak power
Conclusion: Redefine energy management
DJI has not only solved the “range anxiety” and “safety concerns” of traditional drones by deeply embedding AI into the battery system, but also created a new paradigm for intelligent energy management. With the evolution of machine learning algorithms, drone batteries in the future may have self-diagnosis and self-repair capabilities, further unleashing creative potential. For professional users, understanding these AI features will directly improve flight safety and shooting efficiency.
The future of artificial intelligence and mathematics
Last year, large language models were shown to be effective at solving math problems at the high school level and above. “Math is the source of great impact, but it’s done in much the same way that people have done for centuries standing in front of a blackboard,” said Patrick Shafto, DARPA program manager, in a video introducing the program. “The modern world is built on math. Math allows us to model complex systems, such as the way air flows around airplanes, the way financial markets fluctuate, and the way blood flows through the heart. Breakthroughs in advanced math can unlock new technologies, such as cryptography, which is crucial for private messaging and online banking, and data compression, which allows us to take images and video on the internet.”
But advances in math can take years. DARPA wants to speed up the process. The goal of expMath is to encourage mathematicians and AI researchers to develop what DARPA calls “AI co-authors”: tools that can break down large, complex math problems into smaller, simpler problems that are easier to understand and therefore faster to solve.
For decades, mathematicians have used computers to speed up calculations or to check the correctness of certain mathematical statements. The new hope is that AI might help them crack previously unsolvable problems.
But there’s a big difference between AI that can solve high school math problems (which the latest generation of models have mastered) and AI that can (in theory) solve problems that professional mathematicians spend their careers trying to solve.
On the one hand, these tools can automate some of the tasks that math graduates do; on the other, they can push human knowledge beyond existing limits.
Here are three ways to think about this gap.
AI needs more than just clever tricks

Large language models aren’t good at math. They make things up and can be convinced that “2 + 2 = 5.” But newer versions of the technology, especially so-called large reasoning models (LRMs) like OpenAI’s o3 and Anthropic’s Claude 4 Thinking, are far more capable than ever before—and that’s got mathematicians excited.
This year, many LRMs scored high on the American Invitational Mathematics Exam (AIME), an exam for the top 5% of high school math students in the United States. LRMs try to solve problems step by step, rather than just giving the first answer.
Meanwhile, new hybrid models that combine a large language model (LLM) with some kind of fact-checking system have also made breakthroughs. Emily de Oliveira Santos, a mathematician at the University of São Paulo in Brazil, points to Google DeepMind’s AlphaProof system, which combines an LLM with DeepMind’s game-playing model AlphaZero, as an important milestone. Last year, AlphaProof became the first computer program to reach the level of a silver medalist in the International Mathematical Olympiad, one of the world’s most prestigious math competitions.
In May, Google DeepMind’s model AlphaEvolve achieved better results than anything humans have ever achieved on more than 50 unsolved math problems and several real-world computer science problems.
The momentum of progress is palpable. “GPT-4’s math capabilities are nowhere near undergraduate level,” de Oliveira Santos says. “I remember when it was released, I tested it with a topology problem, and it couldn’t get past a few lines before it completely froze.” But when she gave the same problem to OpenAI’s o1, an LRM released in January, it succeeded.

Is AI closing in on human mathematicians?
AI Search: Beyond Information, Towards Intelligence
AI-powered search makes it easier to ask Google any question and get helpful answers with web links. That’s why AI Overview has been one of the most successful features in Google Search over the past decade. As users use AI Overview, we’ve seen them become more satisfied with their search results and search more often. AI Overview has driven more than 10% growth in usage for the queries on Google where it appears.
AI Mode in Search, delivering cutting-edge AI capabilities

Since launching AI Overview, we’ve heard from our power users that they want an end-to-end AI search experience. So we started testing AI Mode in Search in Google Labs earlier this year, and today we’re launching it in the US—without signing up for Labs. AI Mode is our most powerful AI search feature yet, with more advanced reasoning and multimodal analysis, and the ability to explore deeper with follow-up questions and helpful web links. In the coming weeks, you’ll see a new AI Mode tab in the search bar on Google Search and the Google app. AI Mode uses our query fan-out technique underneath to break your question into subtopics and issue multiple queries for you at once. This allows Google Search to explore the web more deeply than traditional Google Search, helping you discover more web resources and find high-quality, relevant content that matches your question.
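The query fan-out idea can be sketched in a few lines. Both the subtopic list and the `search` function below are stand-ins: Google’s real system derives subtopics with a model and queries its own index, while this toy version just shows the split-and-issue-concurrently shape.

```python
from concurrent.futures import ThreadPoolExecutor

def search(query: str) -> str:
    # Placeholder for a real search backend.
    return f"results for '{query}'"

def fan_out(question: str, subtopics: list[str]) -> list[str]:
    # Expand one question into several subtopic queries and run them at once.
    queries = [f"{question} {s}" for s in subtopics]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(search, queries))

hits = fan_out("best ultralight tent", ["reviews", "prices", "durability"])
print(len(hits))  # one result set per subtopic
```

Issuing the subqueries concurrently is what lets a fan-out explore many angles of a question without multiplying the user’s wait time.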
AI Mode is our first introduction to cutting-edge features in Gemini and a preview of where we’re headed. As we gather feedback, we’ll gradually incorporate many features of AI Mode into the core search experience. Starting this week, we’re rolling out AI Mode with a custom version of Gemini 2.5 (our smartest model) in the US.
We’re also showcasing new, advanced features coming soon to AI mode, which will be available first in Labs over the next few months to gather feedback and input from advanced users. Read on to learn more.
Deep Search in AI Mode to help you research

For questions that require more comprehensive answers, we’re introducing a deeper research feature in AI Mode called Deep Search. Deep Search uses the same query fan-out technique, but takes it to the next level. It can launch hundreds of searches in minutes, reason across different pieces of information, and create professional, fully cited reports, saving you hours of research time.
Live Search Mode

We continue to push the boundaries of visual search with Google Lens, which more than 1.5 billion people use every month to search for what they see. Now we’re taking another step in multimodal search by bringing the real-time capabilities of Project Astra to Google Search. With Google Search Live, you can interact with Google Search in real time using your camera to discuss what you’re seeing.

Agents that do your work for you

You often use Google Search to get work done, and now we’re bringing the agent capabilities of Project Mariner to AI Mode to help you save time on tasks like buying tickets. Simply type “Find two affordable tickets to the Reds Lower Deck game this Saturday” and AI Mode will launch a query fan-out, analyzing hundreds of potential ticket options across websites, including real-time prices and availability, and taking care of the tedious form filling. AI Mode shows you ticket options that match your exact criteria, and you can choose to complete your purchase on your favorite sites—saving you time while keeping you in control.

A new AI shopping companion to help you find the perfect item

The new AI shopping experience combines the powerful Gemini model with our shopping graph to help you browse items, find inspiration, think through considerations, and narrow down your product choices. If you want to see how something looks on you, just upload a photo of yourself to virtually try on billions of outfits. Once you find something you like, you can ask our new agentic checkout feature to purchase it on your behalf using Google Pay, with your guidance and supervision, if the price is right. Read our shopping article to learn more.
Context in AI Mode to get customized results

To provide a more personalized experience, AI Mode will soon offer personalized suggestions based on your search history. You can also choose to connect other Google apps, starting with Gmail, to give it more context about you.
You’ll see when AI mode is using your context to help. You’re always in control and can choose to connect or disconnect at any time.
Custom charts and graphs to visualize your data

When you need additional data processing or visualization help, AI Mode can analyze complex data sets and create vivid charts, all tailored to your query. For example, say you want to compare the home field advantage of two different baseball teams. Search will use Google’s real-time sports information for analysis and generate interactive charts to answer your specific question.
Anthropic says its new AI model can work almost an entire workday straight
Artificial intelligence startup Anthropic says its new AI model can work for nearly seven hours in a row, in another sign that AI could soon handle full shifts of work now done by humans.
The new model could bring AI one step closer to replacing jobs as tech giants race to build ever more powerful artificial intelligence. It could represent a shift in how people use AI at work, moving from asking digital agents to accomplish individual tasks to giving these tools a broader objective – similar to the way one might instruct an employee or coworker.
Anthropic, which counts Amazon and Google among its backers, introduced its new Opus 4 model on Thursday, along with another model for general use named Claude Sonnet 4.
The new model can handle larger, more complex projects for about a full workday, operating independently without additional prompts from a human. While that’s not exactly a one-to-one match for the kind of workday a human might have, switching among projects or tasks, it’s still a significant advance.
Anthropic says the assistant could be a tool for automating mundane aspects of the workday rather than eliminating roles.
“It’s like the kind of thing that is challenging that, you know, might represent 30% of your day, that isn’t necessarily fulfilling or professionally expanding you, but is necessary in the pursuit of being successful in your job,” said Scott White, Anthropic’s product lead for the company’s AI assistant Claude.ai.
White provided an example of a marketer who wants to analyze previous performance to develop a new advertising strategy. Claude Opus 4 would be able to analyze the current strategy, look through the company’s Facebook and Google ads to assess their performance, notice the difference between the two campaigns and then offer suggestions as to why they may have performed differently.
“It’s basically the ability for Claude to think and reason deeply over a long period of time about your goal, while also using a set of tools with its reasoning capabilities to look at problems from new angles and continue moving the task forward,” he said.
Anthropic’s model arrives as more companies are investing in AI. A survey from venture capital firm Menlo Ventures – also an Anthropic investor – found that enterprise spending on generative AI, the type of AI that can create content and powers services like ChatGPT and Claude, grew sixfold in 2024 compared to 2023. That report also indicated Anthropic doubled its reach, eating into OpenAI’s dominant position in the market for AI business services. McKinsey & Company reported that 92% of companies plan to increase investments in generative AI over the next three years.
And Anthropic is far from the only one looking to cash in. Google on Tuesday announced that its autonomous coding tool, Jules, will be available to the public, while Microsoft introduced a more advanced coding assistant for its Github development platform on Monday. Apple is reportedly working with Anthropic on a new tool that can write and test code, according to Bloomberg.
At the same time, experts increasingly say AI could lead to job losses. The World Economic Forum’s Future of Jobs report, released earlier this year, found that 41% of employers plan to downsize as generative AI plays a bigger role in work-related tasks. Aneesh Raman, the chief economic opportunity officer at LinkedIn, recently fretted about AI replacing some entry level jobs in a New York Times op-ed.
Anthropic’s White thinks AI will make it easier for people to grow their careers outside their formal educations, such as an engineer using AI to design a visual mockup without any design training. However, he acknowledged the need to address the issue of AI’s impact on the workforce.
“It’s not also something that only Anthropic can take a perspective on,” he said. “This is something that the government, policy makers, many companies, need to work together to understand the arc of how this is going to be implemented.”
OpenAI launches new features for WhatsApp users — here’s what’s new
OpenAI has expanded the functionality of ChatGPT in WhatsApp to include uploading pictures, sending voice messages, and linking existing ChatGPT accounts directly through the messaging platform.
These updates have been rolled out to all users globally, ensuring that individuals around the world can benefit from the enhanced functionality. Users can now upload pictures within WhatsApp conversations with ChatGPT, similar to the functionality available in the standalone ChatGPT app, and have the AI analyze and respond to visual content.
In addition, the integration of voice messages allows users to send voice notes to ChatGPT, which the AI will process and respond to in text form. While not the same as ChatGPT Advanced Voice, this feature does provide a more natural interaction with ChatGPT within the platform, catering to a wider range of user preferences.
Account linking for expanded utility
In addition, users now also have the option to link their existing ChatGPT account (whether Free, Plus, or Pro) to WhatsApp. This integration is designed to provide a more seamless experience, allowing users to manage their interactions and usage more effectively. By linking their accounts, users can enjoy extended usage and personalization, thereby enhancing the overall usefulness of ChatGPT in WhatsApp.
How to Use the New Features
To use these new features, make sure your WhatsApp application is updated to the latest version. Once updated, interacting with ChatGPT is simple. Start by saving the number 1-800-CHATGPT (1-800-242-8478) to your phone contacts, then open WhatsApp and message that contact to start a conversation. In the chat, use the attachment icon to select an image and send it directly to ChatGPT. To voice chat, press and hold the microphone icon to record and send a voice message. ChatGPT will process these voice notes and reply in text.
For the best experience, follow the prompts in the chat to link your ChatGPT account, which enables extended usage and personalization features.
Benefits of the Integration
There are several benefits to integrating ChatGPT into WhatsApp. Accessing the AI assistant directly within the platform is much more convenient as it eliminates the need to switch between apps.
After Siri crisis, Apple joins AI data center race
While other big tech companies invested heavily in AI data centers, Apple (AAPL) chose to sit on the sidelines and avoid a surge in capital spending. That seems to have changed: Apple has apparently realized it needs to get into the AI data center game.
Apple is ordering about $1 billion worth of Nvidia (NVDA) GB300 NVL72 systems, Loop Capital analyst Ananda Baruah said late Monday. That equates to about 250 servers, each worth $3.7 million to $4 million, he said in a client note. Apple is working with server makers Dell Technologies (DELL) and Super Micro Computer (SMCI) to develop large server clusters to support generative AI applications.
“AAPL is officially in the large server cluster GenAI game… and SMCI and DELL are the primary server partners,” he said. “While we are still gathering more complete information, it seems likely that this is a Gen AI LLM (large language model) cluster.”
Baruah believes that Apple’s change of strategy is due to the trouble it has encountered in bringing its AI-powered Siri digital assistant to market. Apple has delayed the release of the new Siri indefinitely. The company had hoped to launch the AI features earlier this year, after previewing them at the Worldwide Developers Conference last June, and has reportedly reorganized its executive team to deal with the difficulties in releasing them. According to Bloomberg, an executive called the delays and missteps “unpleasant” and “embarrassing” because the company has been promoting the AI features in TV ads.
Introducing Meta AI App: A new way to access your AI assistant
Meta AI is designed to understand your needs, so its answers are more helpful. It’s easy to communicate, so interactions are more fluid and natural. It’s more social, so it can show you the people and places you care about. You can also use Meta AI’s voice capabilities while multitasking and doing other things on your device, and there’s a visible icon to remind you when the microphone is in use.
Hey, Meta, let’s chat
While talking to AI is nothing new, we’ve improved the underlying model in Llama 4 to give you more personalized, relevant, and friendly responses. In addition, the app integrates other Meta AI features, such as image generation and editing, and now you can do all of this with your AI assistant through voice or text conversations.
We also have a voice demo based on full-duplex voice technology that you can turn on or off for testing. This technology will provide a more natural voice experience and is trained for conversation, so the AI can generate speech directly without reading written responses. It can’t access web pages or real-time information, but we hope to give users a glimpse of the future development direction through first-hand experience. You may encounter technical problems or inconsistencies, so we are constantly collecting feedback to help us continuously improve the experience. Voice conversations, including a full-duplex demo, are available now in the US, Canada, Australia, and New Zealand. For more information on how to manage your experience on the Meta AI app and how to switch modes, visit our Help Center.
Meta AI uses Llama 4 to help you solve problems, answer everyday questions, and better understand the world around you. It features web search to help you get recommendations, dig deeper into a topic, and stay connected with friends and family. If you just want to try it out, we’ve got some conversation starters to spark your search.
We’ve spent decades personalizing the user experience on our platform, and we’ve applied that philosophy to Meta AI to make it even more personal. You can ask Meta AI to remember certain things about you (like that you like traveling and learning new languages), and it can pull out important information based on context. Your Meta AI assistant also uses the information you share across Meta products (like your profile and what you like or interact with) to provide more relevant answers to your questions. Personalized replies are available now in the US and Canada. If you’ve added your Facebook and Instagram accounts to the same Accounts Center, Meta AI can also use information from both accounts to give you a more powerful, personalized experience.
Existing Meta View users can continue to manage their AI glasses through the Meta AI app—all paired devices, settings, and media will automatically move to the new Devices tab once the app is updated.
From AI glasses to desktop
The web version of Meta AI has also been upgraded. It features voice interaction and a new Discover feed, just like what you see in the app. This continuity between the Meta AI app, AI glasses, and the web helps deliver more personalized AI, so you can get the services you need, when and where you want.
You control your experience
Voice is the most intuitive way to interact with Meta AI, and the Meta AI app is designed to help you easily start a conversation with the touch of a button—even if you’re multitasking or busy. If you want voice to be on by default, you can control the “Ready to talk” feature in settings.