Flatfile Automation: Managing Product Listings Across Multiple Amazon Marketplaces
Amazon's flatfile system is the most complete interface for bulk listing management across a large catalog, and one of the least well-documented for multi-marketplace operations. Each marketplace uses a different template version, enforces different field requirements, and applies different validation rules — often for the same product in the same category. Managing listings for ten marketplaces from a single product source requires a systematic approach to template variation, attribute translation, and the upload-validate-correct cycle.
This post describes the flatfile workflow we use for multi-marketplace catalog management: how templates differ, where automation provides the most leverage, and where manual intervention remains unavoidable.
What Flatfiles Are
A flatfile is a tab-delimited spreadsheet that Amazon uses to process bulk listing creation and updates. Each row represents a product. Each column represents an attribute. The template — downloaded from the Inventory > Add Products via Upload section of Seller Central — defines which columns are required, which are optional, and what the valid values are for each field.
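Structurally, a flatfile is just tab-separated text with a header row of attribute columns. The sketch below, using Python's standard csv module, shows the shape; the column names and values here are illustrative, since the authoritative column set always comes from the template downloaded for your specific marketplace and category.

```python
import csv
import io

# Illustrative columns only; the real set comes from the downloaded template.
columns = ["sku", "product-id", "product-id-type", "item_name", "brand_name"]
row = {
    "sku": "SHIRT-BLU-M",
    "product-id": "4006381333931",
    "product-id-type": "EAN",
    "item_name": "Cotton Shirt, Blue, Medium",
    "brand_name": "ExampleBrand",
}

buf = io.StringIO()
# Tab delimiter is what distinguishes a flatfile from an ordinary CSV.
writer = csv.DictWriter(buf, fieldnames=columns, delimiter="\t")
writer.writeheader()
writer.writerow(row)
print(buf.getvalue())
```

Generating files this way, rather than editing spreadsheets by hand, is what makes the later transformation and validation steps scriptable.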
The flatfile system predates the SP-API Listings API and remains the most complete way to manage certain attribute types that the API does not yet fully expose. Category-specific variations, browse node assignments, and certain compliance fields are still flatfile-only operations in several marketplaces. For any seller with more than a few dozen SKUs across multiple marketplaces, the flatfile workflow is not optional — it is the primary catalog management interface.
In practice, the documentation Amazon provides for flatfiles is fragmented and often out of date. The most current specification for any given template is the template itself, downloaded fresh for the specific marketplace and category you are managing. Relying on cached templates or documentation from six months ago is a reliable source of upload errors.
Template Differences Across Marketplaces
The structural differences between marketplace templates are larger than most sellers expect when they first attempt multi-marketplace operations. They are not simply translations of the same fields.
The US Clothing template for a given category may have fifty attribute columns. The equivalent German template may have sixty, including fields that have no US equivalent — specific chemical compliance declarations required under EU regulation, or energy label requirements that apply to certain product types. The Japanese template may structure size and color variations differently, using different variation theme names than the US version recognizes.
The valid values for a given field often differ by marketplace even when the field name is the same. The item_type_keyword field accepts different strings in .com versus .co.uk. color_name values are validated against a marketplace-specific list; a color name that passes validation in the US may be rejected in Germany or Japan because it does not match the approved value set for that market. These rejections are not always clear in the error report — you receive a generic attribute error rather than a direct statement that the value is not on the approved list.
Template versions update without notice. Amazon periodically releases new template versions for a category, adding required fields or deprecating old ones. If you are uploading with a template version that is no longer current, the upload may succeed but the listing may not index correctly — or certain attributes may be silently ignored. We download fresh templates at the start of any significant catalog update rather than reusing saved versions.
Size System Conversions
For apparel and footwear, size system conversion is one of the most labor-intensive aspects of multi-marketplace catalog management. The US, EU, UK, and Japanese size systems are not directly equivalent, and the conversion tables are not uniform across product types.
A women's shoe that is a US size 8 is a UK size 6 and an EU size 39, but a women's shirt that is a US medium may be an EU 38 or 40 depending on the brand's sizing conventions. There is no universal lookup table. The correct approach is to maintain a master size mapping per product type per brand, validated against the specific manufacturer's size guide.
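A master size mapping of this kind can be kept as a simple nested lookup keyed by brand and product type. The structure below is a minimal sketch with hypothetical brand and product-type names; the values for the shoe example match the conversions above, and a lookup that fails raises immediately so mapping gaps surface before upload rather than after.

```python
# Hypothetical master size map: (brand, product type) -> source size -> per-market size.
# Values must be validated against each manufacturer's own size guide.
SIZE_MAP = {
    ("ExampleBrand", "womens_shoes"): {
        "US 8": {"US": "8", "UK": "6", "DE": "39"},
    },
    ("ExampleBrand", "womens_shirts"): {
        "US M": {"US": "M", "DE": "38"},  # brand-dependent; another brand may map to 40
    },
}

def convert_size(brand: str, product_type: str, source_size: str, marketplace: str) -> str:
    """Look up the marketplace-specific size; fail loudly on unmapped combinations."""
    try:
        return SIZE_MAP[(brand, product_type)][source_size][marketplace]
    except KeyError:
        raise KeyError(
            f"No size mapping for {brand}/{product_type} {source_size!r} -> {marketplace}"
        )

print(convert_size("ExampleBrand", "womens_shoes", "US 8", "DE"))  # → 39
```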
The Amazon size_name and size_system fields must be populated together correctly for the listing to display properly in marketplace-specific browse filters. A listing uploaded with a US size in the size_name field but no size_system declaration will be treated as an unformatted string and will not appear in size-filtered search results. This is not always obvious from the upload confirmation — the listing creates successfully, but the size filtering does not work.
For the Japanese marketplace specifically, the size system differs from both European and US conventions for many product categories. Japanese clothing sizes use a separate numeric system. Shoe sizes use a centimeter-based system. The size_map_unit_of_measure field is required in the Japan template and is not present in the US or EU equivalents — a field whose absence causes silent processing failures when templates are reused across markets.
Backend Keyword Limits
Backend keywords — the search terms submitted in the generic_keyword field — differ in character limits across marketplaces, and the limit type differs as well.
Amazon.com enforces a 249-byte limit on the combined generic_keyword field. Amazon.de also enforces a byte-based limit, but German text contains more multibyte characters than English, which means a keyword string that fits within the US byte limit may exceed the German limit once translated. Amazon.co.jp's limit interacts with Japanese character encoding: a single kanji character uses three bytes in UTF-8, so the effective keyword capacity in Japanese is substantially lower than the byte limit implies when measured in Latin characters.
That said, the keyword limit is one of the less impactful variables in multi-marketplace listing performance. We have observed that over-optimization of backend keywords (filling the field to the exact byte limit) produces diminishing returns relative to optimizing the visible title, bullet points, and description for natural language search behavior. The byte limits are a constraint to work within, not a performance lever to maximize.
The practical approach is to maintain a marketplace-specific keyword set for each product, sized to fit within the limits for that market, and to verify byte counts programmatically rather than by visual inspection. A simple Python function that encodes the keyword string and measures byte length against the marketplace limit catches truncation errors before they reach the upload step.
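Such a check is a few lines of Python. The limits table below is an assumption for illustration: only the US figure of 249 bytes is taken from the discussion above, and the per-marketplace values should be verified against the current template for your category.

```python
# Byte limits per marketplace. Only "US" (249) is stated above; the
# others are placeholders to verify against the current templates.
MARKETPLACE_KEYWORD_BYTE_LIMITS = {
    "US": 249,
    "DE": 249,
    "JP": 249,
}

def keyword_bytes(keywords: str) -> int:
    """Amazon measures generic_keyword in UTF-8 bytes, not characters."""
    return len(keywords.encode("utf-8"))

def check_keywords(keywords: str, marketplace: str) -> bool:
    """True if the keyword string fits within the marketplace's byte limit."""
    return keyword_bytes(keywords) <= MARKETPLACE_KEYWORD_BYTE_LIMITS[marketplace]

print(keyword_bytes("shirt cotton blue"))  # → 17 (ASCII: one byte per character)
print(keyword_bytes("シャツ"))             # → 9 (three bytes per character in UTF-8)
```

Running this check in the flatfile generation step catches truncation before the file ever reaches Seller Central.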
The Automation Workflow
The core insight behind flatfile automation is that the variation between marketplaces is mostly structural, not semantic. The product is the same — the dimensions, materials, intended use, and key features do not change. What changes is how that information must be represented in each marketplace's template format.
We maintain a master product data sheet with all attributes stored in a neutral format: dimensions in metric, sizes in the manufacturer's native system, keywords in the source language, categories mapped to a neutral internal taxonomy. Marketplace-specific flatfiles are generated from this master sheet by applying a transformation layer that handles the structural differences for each target market.
The transformation layer is a script — in our case written in Python — that maps each master attribute to the corresponding flatfile column for each marketplace, applies size conversions, translates category taxonomy to the marketplace's browse node structure, and enforces field-level validation before generating the output file. Fields that require manual review — translated copy, marketplace-specific compliance declarations, country-specific regulatory attributes — are flagged in the output rather than auto-populated.
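The core of that transformation layer is a per-marketplace column map plus a list of fields that must be flagged for human review rather than auto-populated. This is a minimal sketch, not our production script: the attribute names, column maps, and review set are hypothetical stand-ins for the real per-category mappings.

```python
# Hypothetical per-marketplace maps: master attribute -> flatfile column.
COLUMN_MAPS = {
    "US": {"sku": "sku", "title": "item_name", "color": "color_name"},
    "DE": {"sku": "sku", "title": "item_name", "color": "color_name"},
}

# Attributes that need manual review (e.g. translated copy) before upload.
REVIEW_FIELDS = {"title"}

def transform(master_row: dict, marketplace: str) -> tuple[dict, list]:
    """Map a master-sheet row to a marketplace flatfile row.

    Returns the output row plus a list of columns flagged for manual review.
    """
    out, flags = {}, []
    for attr, column in COLUMN_MAPS[marketplace].items():
        out[column] = master_row.get(attr, "")
        if attr in REVIEW_FIELDS:
            flags.append(column)
    return out, flags
```

In the real pipeline, the same function is also where size conversion and browse-node translation hook in, so every marketplace file is generated from one code path.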
The download-edit-upload cycle itself is semi-automated. Downloads are scripted via the SP-API Reports API, which exposes the GET_FLAT_FILE_OPEN_LISTINGS_DATA and related report types. Processing the download, comparing it against the master sheet, and generating the upload file is automated. The upload itself and the review of the upload processing report are manual steps — not because automation is impossible, but because the cost of an error in a large-scale upload is high enough to warrant a human review before files are submitted to Amazon.
Common Failure Modes
Upload errors fall into three categories: validation failures, processing failures, and silent failures.
Validation failures are the most straightforward — the upload report returns explicit error messages identifying the row and field that failed. The most common causes are invalid values for controlled vocabulary fields (size_name, color_name, item_type_keyword), missing required fields for the specific template version, and byte limit violations in keyword or description fields.
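Most validation failures can be caught before upload by checking rows against the approved value sets shipped in the template itself. The sketch below assumes hypothetical value sets; in practice they would be loaded from the "Valid Values" data in the downloaded template for the target marketplace.

```python
# Illustrative approved values; real sets come from the downloaded template.
VALID_VALUES = {
    "US": {"color_name": {"Blue", "Red", "Black"}},
    "DE": {"color_name": {"Blau", "Rot", "Schwarz"}},
}

def validate_row(row: dict, marketplace: str, required=("sku", "item_name")) -> list:
    """Return a list of human-readable errors; empty list means the row passes."""
    errors = []
    for field in required:
        if not row.get(field):
            errors.append(f"missing required field: {field}")
    for field, approved in VALID_VALUES[marketplace].items():
        value = row.get(field)
        if value and value not in approved:
            errors.append(f"{field}={value!r} not in approved list for {marketplace}")
    return errors
```

Running this locally turns a slow upload-and-wait error cycle into an immediate pre-flight check, and produces clearer messages than the generic attribute errors in Amazon's processing report.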
Processing failures occur when the upload passes validation but the listing does not update as expected. Common causes include variation relationship errors — a child ASIN submitted without a matching parent, or a parent submitted with a variation theme that does not match the child's attributes — and browse node conflicts where the submitted node does not match the existing classification for an already-established ASIN.
Silent failures are the most difficult to detect. The upload succeeds, the processing report shows no errors, but the listing behaves incorrectly. A size that does not appear in browse filters. A search term that does not index. A variation relationship that appears in the backend but does not render correctly in the storefront. These failures require comparing live listing attributes against submitted values through the Catalog Items API, not through visual inspection of the storefront.
The practical defense against silent failures is a post-upload verification step: pull the live listing data via API for each affected ASIN within twenty-four hours of upload, compare against the submitted values, and flag discrepancies. This step is not complex, but it is what separates a catalog management process that is reliable from one that requires constant manual auditing to catch unexplained listing changes that no error report ever announced.
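The comparison step itself is simple once the live data is in hand. The sketch below assumes both sides have already been normalized into flat attribute dictionaries (fetching via the Catalog Items API is out of scope here); anything submitted that does not match the live value, including attributes that were silently dropped, comes back as a discrepancy.

```python
def diff_listing(submitted: dict, live: dict) -> dict:
    """Compare submitted attribute values against live listing attributes.

    Returns mismatches keyed by attribute, with both sides shown so a
    human reviewer can decide whether the difference matters.
    """
    mismatches = {}
    for attr, expected in submitted.items():
        actual = live.get(attr)  # None if the attribute was silently dropped
        if actual != expected:
            mismatches[attr] = {"submitted": expected, "live": actual}
    return mismatches

submitted = {"size_name": "M", "color_name": "Blue"}
live = {"size_name": "M"}  # color_name never indexed: a silent failure
print(diff_listing(submitted, live))
```

Run against every ASIN touched by an upload, this is the report that surfaces the silent failures no processing report ever announces.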
Related Reading
- Expanding Amazon FBA to New Marketplaces: A Data-Driven Framework — the market selection framework that precedes listing management decisions
- Building an Amazon Data Warehouse with FastAPI and TimescaleDB — how we store and query catalog and listing performance data to inform update decisions