

Customer Stories & AI

Google puts a premium on good storytelling, and the customer is the hero of every story it tells. The challenge becomes how to showcase Google’s complex and transformational technologies while telling compelling stories about clients who are themselves doing important, transformational work, all while hewing to Google’s strict format and SEO guidelines, content requirements, and word limits.

 

I begin with carefully structured interviews that yield quotes suitable for video clips or blog posts. Find a hook, do some old-fashioned gumshoe research, corral some compelling before-and-after stats . . . and an engaging narrative begins to take shape.

(And check out these customer stories I’ve written for GrowthLoop and Okta.)


Writing for Google requires fluent knowledge of its wide range of AI technologies as well as the ability to quickly learn the unique AI technologies of Google’s customers, conduct interviews with those companies’ founders and technologists, and write about their products adroitly. Many of the following customer stories showcase that ability. (I’ve written about AI for other companies as well.)

AssemblyAI: Building complex speech-to-text models faster by upscaling to Google Cloud and Vertex AI

At AssemblyAI, there was no lack of ingenuity—just a lack of computing resources. 

The artificial intelligence company builds speech-to-text and speech understanding models, accessible through their API, to help product and development teams turn speech into meaning. Going far beyond speech-to-text transcription, AssemblyAI solves complex problems in audio intelligence like summarization, personally identifiable information redaction, and sentiment analysis. 

"We’re democratizing access to AI-based speech understanding and unlocking new use cases for conversational intelligence by training and packaging speech-to-text models," says Ahmed Etefy, the lead engineer on AssemblyAI’s data infrastructure team in Vienna.

But when AssemblyAI’s growing research team, which Etefy’s team supports, embarked on their next generation of AI speech models, they hit a wall they simply could not scale.

 

All of their model training, and much of their data storage, was handled on-premises. That worked for their initial model release, which was trained on 1.1 million hours of audio stored in hundreds of terabytes of data. By comparison, their new models would be trained on 12.5 million hours of audio—requiring storage capacity and processing power 10 times larger—and, given the exploratory nature of AI model development, AssemblyAI’s research team would need to run many more experiments simultaneously.

While on-prem training and data storage were expensive, the most serious problem was the digital traffic jam.  Continue reading on the Google Cloud Blog >>


After interviewing a co-founder of AI startup Perennial, I was asked to create twin stories: a reportage-style case study and a ghostwritten first-person blog post. While the two pieces use the same source material, I varied the tone, style, and content to engage different audiences while remaining true to the voice of the interviewee/“author.” (You can find other ghostwritten blog posts here.)

Perennial: Harnessing geospatial soil data for sustainable agriculture and climate resilience

One day David Schurman was looking for life on Mars when he thought of a way to combat climate change here on Earth. Shortly thereafter he co-founded Perennial, whose AI-enabled, Google Cloud–based datasets on soil health enable climate-smart agriculture at a global scale.

A native of Denver, Colorado, Schurman grew up in the shadow of the National Oceanic and Atmospheric Administration (NOAA) and the National Center for Atmospheric Research (NCAR) in nearby Boulder, and spent his high school summers interning at these world-famous scientific research institutes.

“I was working in an atmosphere lab at NOAA, analyzing air samples,” he recalls, “when we crossed the threshold of 400 parts per million of carbon dioxide in the atmosphere”—a key indicator of the seriousness of global warming. The experience was both seminal and visceral. “I knew then that I wanted to help find a way to heal the planet,” Schurman says.  Continue reading on the Google Cloud Blog >>

Combating climate change by mapping global soil carbon levels with Google Earth Engine

by David Schurman

 

Four hundred parts per million.

Climate scientists had been warning us for years: Breaching that threshold of atmospheric carbon meant the planet was on the path toward overheating. And I remember exactly where I was when it happened.

As an intern at the National Oceanic and Atmospheric Administration, I was analyzing air samples in an atmosphere lab in Colorado when the news came through from the Mauna Loa Observatory, a sister institution in Hawaii where the 400 ppm carbon level was first measured. It was then that I knew I wanted to help find a way to heal the planet.

But the more research I did, the more frustrated I became. Human beings generate 50 gigatons of carbon emissions every year. I was looking for a climate remediation solution that was both fast and scalable. But the geoengineering techniques (we called them “planet hacks”) I was studying—from light-reflecting aerosols to carbon-sucking machines—either came with dangerous side effects, like drying up the monsoons on which billions rely for water, or would take too long to start making a difference.  Continue reading on the Google Maps Platform Blog >>

Napster Corp.: Powering immersive, 3D ecommerce with compute, storage, and AI in Google Cloud

When Karina Kogan describes the culture at Napster Corp., she uses adjectives like “audacious,” “tenacious,” and “fearless.” And her words are well-chosen, because her company’s mission is nothing short of transformational.

“The future of the internet is three-dimensional and conversational,” she states. “We aim to be a key part of the power behind the immersive web, and you need those qualities when you’re doing something hard that’s never been done before.”

Kogan, Chief Marketing Officer of Napster Corp. and president of Napster Corp. Studio, a no-code platform for building and deploying immersive projects, acknowledges that the roots of three-dimensional computing are in gaming: “Think of all those Gen Zers and Gen Alphas playing Roblox, Minecraft, and Fortnite,” she comments, rattling off the names of some of the most popular console and massively multiplayer online games.

But it’s 3D use cases like ecommerce and enhanced data—enabled by advances in computing power, extended reality technologies, and AI—to which Napster Corp. aspires.

 

While Napster Corp. does have an enterprise product for large multinational clients, Kogan wants to make those 3D ecommerce environments accessible to all users, not just customers with big budgets. To that end, her team is currently beta testing a self-service SaaS platform that will enable anybody with an internet connection to build a 3D website. 

“Immersive technology requires a massive amount of storage and a mammoth content delivery network,” Kogan explains, noting that the average file size for spatial rendering and streaming video is typically 10 to 100 times larger than that of an average web page. Indeed, live video chat uses data transfer and CPU bandwidth that scale quadratically with the number of users, and transcription and text-to-speech functionalities require enormous GPU usage.  Continue reading on the Google Cloud Blog >>

Inworld: Enriching engagement with AI consumer applications for millions of simultaneous users

“Most artificial intelligence applications focus on augmenting productivity and efficiency at the office,” says Kylan Gibbs, “but we all spend time outside of work on things that give us joy, and that’s an underserved part of the AI ecosystem.” 

Gibbs is the co-founder and CEO of Inworld, a startup whose developer platform powers AI-driven consumer applications like massively multiplayer games, learning apps, and virtual assistants. With total venture capital funding of US$120 million, it’s one of the best-funded startups in consumer AI.

Gibbs is a veteran of Google DeepMind, where he worked with the team building early large language models (LLMs) that ultimately became Gemini. It’s also where he met Inworld co-founders Ilya Gelfenbeyn and Michael Ermolenko.

Google and Gemini would go on to play a key role in Inworld’s future.

 

Inworld enables real-time online experiences. The platform offers an AI runtime—the infrastructure that supports the execution and deployment of AI models and applications—that’s scaled to handle tens of millions of concurrent users. It’s also lightning-fast.

Longer lag times are standard for enterprise AI applications like research and writing. But when you’re playing a fast-moving first-person-shooter game that burns up millions of tokens per minute and generates thousands of queries per second, latency is the enemy. Inworld’s platform has shaved latency to just milliseconds.

To support speed at that scale, Inworld turned to Google Cloud, where it hosts its databases, builds its scalable AI runtime platform, and powers its customers’ AI-driven online experiences.

“The technology for building consumer AI applications at a massive scale has arrived, which makes this an ideal time to adapt AI for consumer experiences,” Gibbs observes. “Google Cloud’s cost-effective support for the vast capacity of our runtime needs will enable us to scale well into the future.”  Continue reading on the Google Cloud Blog >>

Galileo: De-risking LLMs and building trustworthy AI apps at scale with Gemini, Google Cloud, and NVIDIA

“It takes a village to launch an AI application,” says Yash Sheth with a gleam in his eye, paraphrasing the proverb usually quoted in reference to raising children. But he’s only half-joking.

While he may not think of them as his kids, Sheth cares deeply about large language models (LLMs) — so deeply that, together with Atindriyo Sanyal and Vikram Chatterji, he co-founded a company whose platform helps developers build the best AI applications possible (without a whole village getting in on the act).

A software engineer and data scientist who worked on speech recognition technology and conversational AI, Sheth saw firsthand how hard it was to ensure that LLMs got things right.

By their very nature, LLMs are unpredictable, or “non-deterministic,” in developer-ese: Their output can vary even if the inputs remain the same. This makes it difficult for engineers to “de-risk” them, the process of ensuring that an LLM works correctly.

But “getting it right” involves a lot of rigorous, time-consuming experimentation and evaluation. And every stakeholder in the village, from the product manager to the subject matter expert, wants to make sure that the application meets their business requirements and accuracy expectations, as well as standards for safety, security, and compliance.

So Sheth and his co-founders set about building a holistic “trust layer” using what he dubbed “evaluative intelligence”: the right metrics and infrastructure needed to measure how AI applications are performing.

Thus was born Galileo: a platform that helps developers build, ship, and scale reliable LLM-based generative and agentic AI applications by letting them easily observe and measure behavior, evaluate models against application-specific benchmarks, automatically surface insights, and quickly mitigate any problems, such as hallucination, so that the applications behave as intended.  Continue reading on the Google Cloud Blog >>

Radisson: Personalizing ads in multiple languages automatically with Vertex AI

Every moment matters. Chaque instant compte. Jeder Moment zählt. Cada momento importa. Kila wakati ni muhimu.

That’s Radisson Hotel Group’s brand promise. And with over 1,460 hotels in more than 100 countries, keeping that promise means speaking many, many languages. (That last one is Swahili.)

“We strive to deliver memorable moments in every interaction our customers have with our brand,” explains Velit Dundar, Vice President for Global Ecommerce at the Radisson Hotel Group, “and our ‘Yes I can’ service philosophy means we strive to speak to our customers in their own language,” he adds, noting that the Radisson website is available in multiple tongues. “Ad personalization is at the heart of Radisson’s mission because it demonstrates that we’ve anticipated and met each of our guests’ unique needs.”

 

But personalizing marketing content so it resonates with local audiences had become a linguistic and process automation challenge for Velit, who oversees digital advertising and decisioning at Radisson. Not only were existing workflows primarily manual, but customer information was siloed by region and scattered across multiple platforms. “That made it difficult to deliver the kind of one-to-one personalization at scale that’s key to enhancing both customer experience and customer lifetime value,” he continues.

So he turned to Google for a solution.

“Radisson Hotel Group wants to be at the forefront of adopting emerging technologies and implementing AI-driven solutions,” Velit states. “Google is a pioneer in AI, and Google Cloud’s scalable data platform and advanced AI tools made them an ideal partner.”  Continue reading on the Google Cloud Blog >>

Delivery Hero accelerates code reviews and boosts code quality with Gemini Code Assist

Hankering for tapas in Madrid? Not enough time to get to el supermercado in Buenos Aires? Forget to buy flowers for the surprise party you’re throwing for Appa’s sixty-fifth birthday in Seoul? Then call Delivery Hero, a local delivery platform serving customers in nearly 70 countries on four continents, and it’ll be there in less than an hour.

Operating a local delivery platform on a global scale is no mean feat. Just ask Mert Aydin, principal software engineer at Delivery Hero, who oversees the optimization of software engineering and data science practices.

“Successfully connecting millions of customers with vendors and delivery people requires constant technological innovation,” Aydin explains. “We want to give our developers and data scientists top-of-the-line tools because a better user experience for them translates directly into the best user experience for our customers.” That means customers can find and order what they want more easily, and receive their orders more quickly.

So Delivery Hero added Gemini Code Assist to its software development toolbox to address critical challenges in the company’s code review workflows. Chief among them was how pull requests were handled and the errors that resulted.  

 

Generating pull requests and assigning reviewers had always been handled manually. That added friction to the coding lifecycle, impeded new feature delivery, and—most critically—allowed bugs to escape into software releases. In addition, vaguely written pull requests left reviewers puzzling out information about configuration, design, and library changes, further bogging down the development cycle.

“Even the most highly skilled engineers miss subtle issues like null or dangling pointers, typos, and naming inconsistencies,” Aydin acknowledges, “but subtle errors can have an outsize impact on platform reliability and customer experience, like ordering deliveries.” Aydin knew AI-enabled tools would excel in exactly those areas that needed improvement.  Continue reading on the Google Cloud Blog >>

Newsweek: Boosting user engagement with generative AI search, translation, and recommendations

Even while the rallying cry of “digital first” continues to echo in the ears of executives at other legacy media companies, Newsweek’s Bharat Krish has tuned in to a more resonant solution: artificial intelligence.

When he joined as chief product officer, Newsweek had already completed the transition from a print-first to a digital-first model. “The challenge for digital-first publications is finding diverse and sustainable revenue streams beyond digital advertising,” Krish, a seasoned media executive, observes. “Our content has to be readily accessible, and we have to keep readers engaged with it so we can monetize it in multiple ways.”

But maintaining and growing a strong digital presence has traditionally required significant resources and increased operational overhead. Krish sees AI as a powerful solution to streamline and enhance these efforts. “The next big challenge for media companies will be shifting from a digital-first to an AI-first mindset,” he predicts.

At Newsweek, that shift has begun with the search function.  Continue reading on the Google Cloud Blog >>

Dresma: AI photo and video generation up 400% and 60% faster on platform for large retailers

How many ecommerce websites did you visit today? And how many pages did you scroll through before you made a purchase? An estimated 600 million items are available from Amazon alone—each one competing for your attention with product photos and videos. That’s a heck of a lot of content.

“Even large brands struggle to keep up with the demand for product-related visuals,” states Nishka Sinha, the co-founder and CMO of AI startup Dresma. “Our company’s mission is to help brands leverage artificial intelligence to create all the visual content they need to convert browsers to buyers.”

Dresma’s platform tracks performance metrics such as impressions, engagement, reviews, and sales on platforms like Instagram, Pinterest, and Amazon, where their customers market and sell their products. By marrying this data to the product videos and photography in use, the platform determines what type of content will perform best.

That’s when Dresma’s AI tools go to work. By amalgamating content from a customer's digital asset management (DAM) system, its brand guidelines, the requirements of each website or marketing channel, and its own platform’s aggregated data, Dresma AI creates high-performing still photos and videos at scale. And for large retailers with thousands or even tens of thousands of SKUs as well as markets across the globe, the scale can be enormous, the variety vast, and the level of detail daunting.

“In product photography, there’s no room for error, but the large language models we’d been using had trouble accurately reproducing attributes like textures and colors,” Sinha acknowledges. Dresma also had concerns regarding the provenance of the training data used for those LLMs—an especially sensitive issue when synthesizing faces for fashion photography.

And its previous cloud provider couldn’t keep up with the demands of a platform that processes 20,000 to 30,000 images and videos daily: The data-transfer bottlenecks and high failure rate on API calls to customers’ DAMs limited content production and impeded Dresma’s growth. “We needed an integrated platform with state-of-the-art LLMs and APIs backed by easy-to-execute workflow orchestration and scalable compute,” she recounts. So Dresma turned to Google.  Continue reading on the Google Cloud Blog >>

Atropos Health: Gemini is up to 32% more accurate and >98% faster in healthcare content summarization

Your doctor makes treatment decisions based on clinical evidence. But not all evidence is equal.

There’s evidence established through double-blind randomized clinical trials and published in medical journals. There are opinions based on a clinician’s years of experience. And somewhere in between lie observational research studies: information based on raw, highly specific data compiled from millions of anonymized health records, insurance claims, genetic testing results, data from wearable devices, and other artifacts of day-to-day patient care.

Amassing, analyzing, documenting, and summarizing this “real-world evidence,” or RWE, can be a herculean undertaking involving months of work for a team of healthcare professionals and data scientists. That’s where Atropos Health comes in.

“To manifest the value of real-world evidence, the data needs to be aggregated and converted from a wall of numbers in a database into something healthcare practitioners can easily understand,” explains Neil Sanghavi, president and head of product at the five-year-old Series B startup. “Atropos Health uses Gemini models to generate and summarize real-world evidence studies from structured and unstructured data in a fraction of the time that humans can,” he continues. “What used to take months to compile and publish is now available on demand in days or even minutes with our software.”  Continue reading on the Google Cloud Blog >>

Dutch Bamboo Foundation: Reversing climate change with a Gem of a grant writer and research assistant

If you had to choose one tactic for combating climate change, what would it be? For an American living in the Netherlands, the answer is planting bamboo.

“Like all plants, bamboo reduces atmospheric carbon, the main cause of global warming,” explains Brian Wennersten, the founder—and sole employee—of the Dutch Bamboo Foundation. “But bamboo has a big advantage: It’s the fastest-growing plant in the world, and therefore absorbs more carbon more quickly.”

While most people associate bamboo with Chinese pandas, not Dutch environmentalists, that will change if Wennersten is successful. His organization’s mission is to sow carbon-hungry bamboo plantations wherever they’ll grow on European soil, reduce carbon one vigorous stick of bamboo at a time, and put that woody fiber to myriad green uses worldwide.  Continue reading on the Google Workspace Blog >>

Saving lives with APIs: How UNOS manages the organ transplant network with Apigee

While accounts of skin grafts date back as far as 1500 BCE, the first verified successful organ transplant was performed in 1954, when a Boston doctor replaced a man’s diseased kidney with a healthy one donated by his twin brother. Twenty-three years and many successful organ transplants later, the United Network for Organ Sharing, or UNOS, was established as the world’s first computer-based organ-matching system.

Since then, UNOS has facilitated over one million organ transplants in the United States. In 2024 alone, more than 48,000 organ transplants were performed. 

 

There are 100,000+ people on the waiting list to receive an organ at any given time. But that “list” is really more of a complex algorithm that considers dozens of variables—from a patient’s health status and location to the compatibility between donor and recipient. It’s managed by UNOS’s ever-evolving UNet platform, which ensures that donated organs are distributed efficiently and equitably, and collects post-transplant data on recipients to continuously improve the matching algorithms and, by extension, patient outcomes.

 

To keep all that data flowing, UNOS works closely with 300 transplant hospitals, laboratories, organ procurement organizations, electronic healthcare and donor record companies, and medical review boards for up-to-the-minute medical, demographic, and availability information on organ donors and potential recipients. 

Underlying that dataflow is Apigee, Google Cloud’s API management platform, which orchestrates all API traffic that supports organ transplantation nationwide: a total of over 45,000 transactions per minute. Managing Apigee is a team of technologists led by UNOS's senior director of engineering, Michael Ghaffari, who has had a hand in the development of all UNOS products and applications in the nonprofit's technology infrastructure for the past 14 years.  Continue reading on the Google Cloud Blog >>

levelbuild cuts database costs 50% and brings predictive and sentiment analysis to construction firms

The words “construction site” conjure images of oversize backhoes and bulldozers, deafening rockbreakers and piledrivers, hardhats, and vast stockpiles of rebar, steel I-beams, scaffolding, bricks, and cement.

But underlying all that kinetic energy are planning documents, building codes, blueprints, project schedules, itineraries—and the armies of architects, urban planners, accountants, bankers, designers, and engineers who put it all in motion and keep building projects on track.

levelbuild is a Germany-based technology company whose no-code platform helps construction companies and engineering firms bring together all the disparate applications and platforms used by those professionals, via APIs, into a unified user interface that includes drag-and-drop tools for customizing complex project workflows.

It also centralizes all the documents and data associated with those apps and platforms—everything from standard back-office tools to industry-specific estimation, drafting, and project management systems—into an enterprise data warehouse, eliminating the data silos that have grown organically over time.

In 2024, when levelbuild embarked on a complete rebuild of their platform, they also rethought the foundation and infrastructure that supported it. “We were maxing out the capacity of our on-prem and virtual servers, and the company was continuing to grow,” recalls Michael Woitag, levelbuild’s CEO. “We needed a fast, secure, cost-effective, cloud-based solution that could scale along with our client base.”

Together with Google Cloud partner Seibert Group, levelbuild ran a pilot project to compare its platform’s performance on Google Cloud and other cloud providers. Woitag was impressed with the results.  Continue reading on the Google Cloud Blog >>

BLADE takes air travel to new heights with Google Workspace

If you need to fly from San Jose to Shenzhen, Dallas to Dubai, or Columbus to Kinshasa, BLADE’s chartered private jets will get you there. Have a shorter trip in mind? Take a seaplane from Manhattan to your summer getaway in the Hamptons, or a five-minute BLADE helicopter ride from Manhattan to JFK airport. BLADE will even fly you from New York to MetLife Stadium in New Jersey, or to the rooftop of an Atlantic City casino if you’re feeling lucky at roulette.

And Google Workspace has become the backbone for BLADE’s operations, which are often mission-critical: BLADE is also the largest American transporter of human organs for transplant, serving 40 hospitals in 20 states coast to coast.

“Aviation is time-sensitive and relies on accurate logistics—especially when patient care is involved—so we all need access to the same critical information in real time,” explains Lee Gold, chief of staff at BLADE. The BLADE team uses Google Docs, Sheets, Calendar, and Chat to coordinate flights, manage staff schedules, and discuss the needs of passengers awaiting flights in BLADE lounges. Continue reading on the Google Workspace Blog >>

Ajax Systems enhances productivity and strengthens security with Google Workspace

Ajax Systems’ company motto—“Rule your space”—succinctly captures how seriously they take their mission.

An international technology company, Ajax Systems manufactures devices for intrusion protection, video surveillance, fire and life safety, and comfort and automation, as well as the software that controls them. Their portfolio of over 180 devices protects industrial, municipal, commercial, and residential customers: over four million individuals in more than 180 countries. 

“In a world where so many people feel unsafe, the team at Ajax Systems is focused on helping people feel more secure—at home, at work, and in everyday life,” explains Nataliia Kondratenko, the head of business excellence.

And their commitment to security goes beyond their products—it extends to their own internal computer networks and data as well.

“When we design products, we have to keep our customers’ data security as well as their personal safety top of mind, so of course we prioritize our own data security as well,” explains Kondratenko, who is in charge of integrating digital solutions for automating business processes. “That’s one of the reasons we chose Google Workspace as our internal collaboration platform.”  Continue reading on the Google Workspace Blog >>

SAI360 boosts security, efficiency, and morale while cutting cloud costs 35%

“High-risk, high-reward” may be the mantra among certain high-flying investors and gamblers with money to burn.

But for large enterprises in highly regulated industries—think finance, healthcare, consumer packaged goods, and energy—“risk” is part of “governance, risk, and compliance,” or GRC. It’s something to be managed and minimized to keep products and consumers safe, and companies in line with laws that carry stiff penalties when not followed to the letter.

Every company has its own, distinct GRC workflows and strategies—intricate, carefully honed, and updated frequently. SAI360’s platform helps make those strategies successful.

“Our software’s ability to handle any level of complexity and adapt to any strategy or workflow makes it unique,” says Eric Fouarge, SAI360’s global head of engineering for cloud operations.

 

“As our tech stack grew to accommodate new platform features, we were hampered by the complexity inherent in how our cloud provider handled networking,” Fouarge concedes, “and they weren’t working with us to solve those problems.”

Needing to accelerate product development, and with a modernization project for legacy application stacks looming, he migrated SAI360’s infrastructure to Google Cloud.  Continue reading on the Google Cloud Blog >>

Humana cuts IT costs and tightens security with 13,000-user Google Workspace migration

At Humana, taking care of business means taking care of people—and that’s not done sitting behind a desk.

“Most of our team are care workers out in the field, in customers’ homes or in designated care facilities,” explains Adam Nerell, the CIO of Humana, which is a leader in care provision services for families, the elderly, and the disabled in Sweden, Finland, and Norway. “Their focus is people, not computers, and they need easy-to-use software that lets them concentrate on caring for their clients.”

But instead of intuitive tools, the Humana team was bogged down by a suite of desktop-based office applications overstuffed with unused features. Worse still, 70% of IT’s time was spent maintaining and securing these memory-intensive applications on laptops and desktop computers. 

The Humana team could collaborate on documents, spreadsheets, and presentations—but not easily. “Working on the same documents offline and online led to syncing errors and version control nightmares,” Nerell recalls.  Continue reading on the Google Workspace Blog >>

Adwise: Unlocking the power of remote & online work with Google Meet, Gemini, and Logitech

“With freedom comes responsibility.”

While Eleanor Roosevelt penned this pronouncement from the world stage more than six decades ago, Leonie Kranenberg, Director of People and Culture at the Dutch marketing agency Adwise, still finds it relevant, fondly quoting her in the context of today’s hybrid workplaces.

“We offer our people the freedom to work where and when they want because that flexibility enables them to do their best work,” explains Kranenberg. “But,” she cautions, “collaboration is the anchor of our corporate culture and a key to our success.”

To help team members balance freedom and flexibility with their responsibilities to clients and one another, Adwise uses Google Meet with Logitech hardware under the Google Workspace ecosystem to provide seamless collaboration, whether they’re in the office, working from home, or on a “workcation” abroad.  Continue reading on the Google Workspace Blog >>

How Vimeo uses Gemini and Google Workspace to help customers create videos that move the needle

Vimeo understands the power of storytelling.

That’s why Vimeo is on a mission to simplify how the 287 million creatives, entrepreneurs, and businesses that use its platform can edit, manage, share, and monetize high-quality video content. 

“Video is the best way for people—and companies—to tell and share their stories,” notes Vimeo’s Yael Burla, who leads go-to-market strategy for AI products. “Video has transformed the way people live, work, learn, shop, and see themselves and others.”

Indeed, more than 350,000 videos are uploaded to Vimeo every day, and collectively, those videos have been watched more than 100 billion times, streamed live and on demand in more than 190 countries. And marketers of all types have long known that video is a powerful channel for building brand engagement.

But simplifying video-making and putting its power in the hands of the broadest range of customers takes intensive behind-the-scenes collaboration among Vimeo team members.  Continue reading on the Google Workspace Blog >>
