Figma Launches AI-Powered Tools With Strong Focus on Privacy

Figma has launched a suite of AI-powered features designed to improve the efficiency and creativity of design workflows. Collectively known as Figma AI, these tools help designers find inspiration, explore diverse design directions, and automate repetitive tasks.

“All of the generative features we’ve launched to date are powered by off-the-shelf third-party AI models and have not been trained on private Figma files or customer data,” according to the company. The announcement highlights the use of publicly available community files to refine visual search and asset capabilities.

Figma said it plans to create new models that better integrate with the platform’s specific concepts and tools. “To achieve these improvements, we need to train models that better understand Figma’s design concepts and patterns, as well as its internal formats and structure, using Figma content,” the company said.

Data privacy and security are key components of Figma’s AI strategy. The company reiterated, “Our model development process is designed to protect your privacy and confidential information.” Several measures are in place, including encryption of all data in transit and at rest, custom permissions, and strict user access controls. Additionally, third-party vendors are not allowed to use uploaded data to train their own models and are limited in how long they can store user data. For example, OpenAI and other vendors store data only temporarily, or not at all, as needed to process requests and enable AI features.

To further ensure privacy, data used for model training is anonymized and aggregated. Figma noted, “We do not use any data from Figma for Education or Figma for Government accounts for model training.” Figma also gives customers the option to share their content for AI model training; this sharing is controlled by admins and will not take effect until August 2024. Examples of customer content include text, images, comments, annotations, and layer properties.

Figma distinguishes between customer content and usage data. The latter concerns metrics about how the platform is accessed and used, but does not include user content itself. This type of data is used in an aggregated and anonymized format to ensure user privacy. Examples of usage data include information about how often content is accessed and various technical logs.

Admins have two new settings to manage AI usage and content training. These settings can be adjusted at any time to control AI features and access to content training. AI features are enabled by default for all plans, while content training settings vary. For example, content training is enabled by default for Starter and Professional plans, while for Organization and Enterprise plans, it is disabled by default.
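The per-plan defaults described above can be summarized in a small sketch. The dictionary structure and key names here are illustrative, not Figma’s actual API or configuration format; only the default values reflect the announcement:

```python
# Illustrative summary of Figma's announced default AI settings per plan.
# Key names are hypothetical; the boolean defaults come from the announcement:
# AI features are on by default for all plans, while content training defaults
# to on for Starter/Professional and off for Organization/Enterprise.
DEFAULT_AI_SETTINGS = {
    "Starter":      {"ai_features": True, "content_training": True},
    "Professional": {"ai_features": True, "content_training": True},
    "Organization": {"ai_features": True, "content_training": False},
    "Enterprise":   {"ai_features": True, "content_training": False},
}

def content_training_default(plan: str) -> bool:
    """Return the announced default for the content-training setting."""
    return DEFAULT_AI_SETTINGS[plan]["content_training"]
```

Admins can flip either setting at any time; the sketch only captures the starting state for each plan.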

“These are team-level settings for Starter and Professional plans and organization-level settings for Organization and Enterprise plans,” Figma explained. However, the content training setting will not go into effect until August 2024; until then, no content training will take place. Once it takes effect, new content and changes will be handled according to the chosen setting, and admins who disable content training thereby exclude new inputs from model training.

As for the Figma community, the company emphasized its appreciation for community creators. It confirmed that free files within the community are governed by CC BY 4.0 licenses, which allow for transformation but require attribution. Figma does not currently train generative models on community files to produce designs, though it does use free public files for specific purposes such as improving semantic and visual search. Paid files remain excluded from AI model training efforts.
