Import CSV into Firebase Firestore

Cloud Firestore makes some things harder than they should be! One of them is importing your data: there's no way to import collections or documents through the Firebase Console. Previously, you had to write a custom script that parsed the CSV file, iterated over the rows, and created Firestore documents. Luckily, there's now Firefoo, the Firestore GUI with CSV import built in!
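For reference, such a hand-rolled script typically looked something like the hypothetical sketch below. It is not Firefoo's implementation; it assumes the firebase-admin and csv-parse npm packages, Application Default Credentials, and an illustrative users.csv file.

```typescript
// Hypothetical custom import script (not Firefoo's code).
import { createReadStream } from "node:fs";
import { parse } from "csv-parse";
import { initializeApp, applicationDefault } from "firebase-admin/app";
import { getFirestore } from "firebase-admin/firestore";

initializeApp({ credential: applicationDefault() });
const db = getFirestore();

async function importCsv(filePath: string, collectionName: string): Promise<void> {
  // columns: true turns each CSV row into an object keyed by the header row
  const rows = createReadStream(filePath).pipe(parse({ columns: true }));
  for await (const row of rows) {
    // Every CSV value arrives as a string; type conversion is up to you
    await db.collection(collectionName).add(row);
  }
}

importCsv("users.csv", "users").catch(console.error);
```

Document IDs, type conversion, and nested fields all had to be handled by hand in such scripts; the options below show how Firefoo takes care of them.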

Upload CSV as Firestore Collection using Firefoo

1. Download and install Firefoo.
2. Right-click your project in the sidebar and choose Import Collections.
3. Click on the Data File field and select your CSV file.
4. That's it! Click the Import button!
Your CSV rows are imported into Firestore and a progress popup opens. The import continues to run in the background when you close that popup; you can get back to it via File → Tasks.

Options

Document IDs

By default, Firefoo auto-generates random 20-character strings (e.g. vBEu6dxicQ0izOxQoRdl) as document IDs. Select use column to use the values of a CSV column as document IDs instead. Make sure that this column contains unique values, otherwise documents with the same ID will overwrite each other. The values must also follow Firestore's constraints on document IDs; most notably, they must not contain slashes.
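For example, with a hypothetical sku column selected as the ID column and products as the target collection:

```csv
sku,name,price
A-100,Widget,9.99
A-101,Gadget,19.99
```

This creates the documents /products/A-100 and /products/A-101, each with name and price fields.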

Columns

To omit specific columns from the import, unselect the checkbox to the left of the table.

Field Names

For every column in the original CSV data, you can specify the target field name for the Firestore documents. Use Change Field Name Format to quickly change the format of all field names. Non-alphanumeric characters should be escaped by wrapping the field name in backticks, e.g. `field/with/slashes`; the backticks will not be part of the final Firestore field name. Dots in a field name nest the field inside a Firestore Map: address.street will create a street field inside of an address Map. Escaping and nesting can be combined: `my-map`.`my.field` creates a my.field field in a my-map Map. Read more about Firestore field path constraints in the Google Cloud documentation.
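As an illustration (the values are made up), these field names produce the following document structures:

```
address.street        →  {"address": {"street": "Main St 1"}}
`field/with/slashes`  →  {"field/with/slashes": "value"}
`my-map`.`my.field`   →  {"my-map": {"my.field": "value"}}
```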

Data Types

Firefoo analyzes the first 2000 lines of the CSV file and suggests target types for every column, listed in the Allowed Types column. To change the allowed types for a column, click on the type in the table. It is possible to allow multiple types.

Fallback (when String is not allowed)

When String is not allowed, you can specify the fallback: what should happen if the CSV value cannot be parsed as one of the allowed types?
  • Leave field undefined: Will not create a field at all.
  • Set field to null: Will assign the Firestore Null value to the field.
  • Parse as string: Will create the Firestore String with the CSV value.
  • Skip document: Skips the whole document.
Empty values will always result in this fallback behavior.
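For example, given a hypothetical age column where only Integer is allowed and a row contains the unparseable value abc, the four fallbacks behave as follows:

```
Leave field undefined  →  document is created without an age field
Set field to null      →  {"age": null}
Parse as string        →  {"age": "abc"}
Skip document          →  the whole row is not imported
```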

Empty Strings (when String is allowed)

When String is one of the allowed types, you cannot specify a fallback behavior, because everything can be parsed as a String. Instead, you can specify how empty values are handled:
  • Leave field undefined: Will not create a field at all.
  • Set field to empty string: Creates a String field with an empty value.

Parsing Types

If you generate the CSV manually, make sure to escape the values properly: values containing commas or double quotes must be enclosed in double quotes, and embedded double quotes must be doubled.

String

  • Values containing commas or double quotes must be wrapped in double quotes, with inner double quotes doubled; see the example below.
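For example, here is a raw CSV line using standard CSV escaping (the column names are made up):

```csv
name,quote
Alice,"Said ""hi"", then left"
```

The quote field is parsed as: Said "hi", then left.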

Boolean

  • parsed as false: FALSE, NO, 0 (case insensitive)
  • parsed as true: TRUE, YES, 1 (case insensitive)

Integer

  • 64-bit signed integer, e.g. 0, 12, -34

Double

  • parsed as value: 0, 12.34, -137.5, 24e3
  • parsed as NaN: NAN (case insensitive)
  • parsed as Infinity: INF, +INF, INFINITY, +INFINITY (case insensitive)

Timestamp

Null

  • NULL, Null, null

Array

  • Either multiple columns with the same prefix, each ending in a dot and consecutive integers: myField.0, myField.1, myField.2
  • Or JSON-formatted values in CSV: ["abc", "def", "ghi"]. See Firefoo JSON Export (TODO link) for the full specification.
An array cannot directly contain other arrays, which is a Firestore limitation.
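For example, with a hypothetical tags field, both notations produce the same array:

```
Columns:  tags.0, tags.1, tags.2  with values  red, green, blue
Or:       a single tags column containing  ["abc"-style JSON: ["red", "green", "blue"]
Result:   {"tags": ["red", "green", "blue"]}
```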

Map

  • Either two columns with dot notation: myMap.field1, myMap.field2
  • Or JSON-formatted values in CSV: {"abc": 1, "def": true, "ghi": "turtle"}. See the Firefoo JSON Export format.
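Both notations produce the same Map (hypothetical field names and values):

```
Columns:  myMap.field1, myMap.field2  with values  1, turtle
Or:       a single myMap column containing  {"field1": 1, "field2": "turtle"}
Result:   {"myMap": {"field1": 1, "field2": "turtle"}}
```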

Document Reference

Path to a document, with or without leading and trailing slashes: /myCollection/myDoc or myCollection/myDoc/mySubcollection/myNestedDoc/

Geopoint

Two columns in the CSV, myField.__lat__ and myField.__lon__, result in a single myField Geopoint field in Firestore.
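For example (hypothetical field name and coordinates):

```
Columns:  location.__lat__, location.__lon__
Values:   52.52, 13.405
Result:   one location Geopoint field with latitude 52.52 and longitude 13.405
```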

Preview

In the Preview Tab, you can see what your data will look like before starting the import. Geopoints and Document References are shown in the Firefoo JSON format here, but they will become the native Firestore types after the import.

FAQ

How many CSV rows can I import?

There is no limit, you can import millions of documents into Firestore! The CSV file is read line by line and uploaded in small batches. That way, neither the memory of your machine nor your internet connection restricts the import.
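Conceptually, streaming plus batching works like the sketch below. This illustrates the general technique with the Firebase Admin SDK, not Firefoo's actual code; rowStream and BATCH_SIZE are assumptions.

```typescript
import { getFirestore } from "firebase-admin/firestore";

const db = getFirestore(); // assumes initializeApp() was called elsewhere
const BATCH_SIZE = 500;    // Firestore's maximum number of writes per batch

// Consume rows one at a time and flush a WriteBatch every BATCH_SIZE rows,
// so memory use stays flat no matter how large the CSV file is.
async function writeInBatches(
  rowStream: AsyncIterable<Record<string, unknown>>,
  collectionName: string
): Promise<void> {
  let batch = db.batch();
  let count = 0;
  for await (const row of rowStream) {
    batch.set(db.collection(collectionName).doc(), row); // auto-generated ID
    if (++count % BATCH_SIZE === 0) {
      await batch.commit(); // upload this chunk before reading more rows
      batch = db.batch();
    }
  }
  if (count % BATCH_SIZE !== 0) {
    await batch.commit(); // flush the final partial batch
  }
}
```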

Import CSV data into an existing Firestore collection

Instead of creating a new collection from the CSV data, you can also add the data to an existing collection. To do so, right-click the collection in the sidebar and select Import from there. Make sure to adjust the Field Names in the field mapping table so that they match the structure of the existing documents in the collection. If you use document IDs from a CSV column and a document with the same ID already exists, the data will be merged, with imported fields overwriting existing fields.
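For example (hypothetical fields), importing a row whose ID matches an existing document:

```
Existing /users/u1:    {"name": "Ada", "plan": "free"}
Imported row (ID u1):  name = "Ada Lovelace", city = "London"
Result /users/u1:      {"name": "Ada Lovelace", "plan": "free", "city": "London"}
```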

Import CSV into a subcollection

To import the data from the CSV file as a subcollection into a document, specify the path to that subcollection in the Target Path field, e.g. /my_coll/my_doc/my_subcoll. This works for existing subcollections as well as creating new subcollections.

How many requests will this take away from my quota?

Firefoo uses exactly one write request for every row of your CSV file, as every row corresponds to one document.