Inserting Data

{
  "username" : "root",
  "password" : "root",
  "ds" : "datastore1",
  "c" : "collection1",
  "q" : "insert",
  "p" : {
    "data" : ["XML","JSON",{"key":"json"},"Text"]
  }
}

The data being inserted can be in any of the formats supported by BlobCity. The database automatically interprets the format of JSON, XML and SQL data. All other data is stored as plain text strings unless its data type is explicitly specified.

The data key maps to a JSON array, allowing multiple records to be inserted in a single call. The records in the array may be of heterogeneous formats. A JSON record can be placed into the array either as a JSON object or as its JSON string form; both are supported. Records of all other formats must be passed as their string equivalents.
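As an illustrative sketch (the field names are hypothetical), the following payload inserts the same JSON record twice, once as a JSON object and once as its string form, both of which are accepted:

"p" : {
  "data" : [
    {"name" : "Joe", "age" : 30},
    "{\"name\" : \"Joe\", \"age\" : 30}"
  ]
}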

Explicitly specifying data format

"p" : {
  "type" : "csv",
  "data" : ["val1,val2","val3,val4"]
}

The format of the data can be explicitly specified by adding the type parameter within the payload. This requires all data within the array to be of the specified type. Records that are not of the specified type are skipped during the insert operation, while records in the correct format are inserted.

For CSV data, the column values within each record are mapped to the columns of the collection in the collection's current column order. The collection's columns always follow the order in which they were created.

The supported values for the type parameter are json, xml, sql, csv, text and auto. The auto type allows for heterogeneous data formats within the array and requests automatic type detection of each record. The type parameter is checked in a case-insensitive manner, and omitting it results in the same behaviour as specifying auto.
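For example, a payload that forces every record in the array to be stored as plain text might look like the following sketch (the record contents are illustrative):

"p" : {
  "type" : "text",
  "data" : ["first plain text record","second plain text record"]
}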

For sql type data, only INSERT INTO commands are interpreted. Each record specified must be a SQL query string corresponding to a valid INSERT INTO statement.
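A possible payload for sql type data is sketched below; the collection and column names are illustrative:

"p" : {
  "type" : "sql",
  "data" : ["INSERT INTO collection1 (col1, col2) VALUES ('val1', 'val2')"]
}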

Explicitly specifying column names for CSV data

"p" : {
  "type" : "csv",
  "data" : ["val1,val2","val3,val4"],
  "cols" : ["col1","col2"]
}

For CSV data the column names can be explicitly specified for the import. The specified column names are used for all records, and every record must have the same number of values as the number of column names provided. Typically, when processing a CSV file, the first row of the file contains the column names, which should be passed in the cols parameter of the request, as in the sketch below.
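For instance, assuming a hypothetical two-column CSV file whose first line is the header name,city followed by two data rows, the request payload might look like:

"p" : {
  "type" : "csv",
  "cols" : ["name","city"],
  "data" : ["Joe,Boston","Jane,Chicago"]
}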

Response Structure

{
  "ack" : "1",
  "time" : 1000,
  "p" : {
    "status" : [1,0,1,1],
    "ids" : ["id1","","id3","id4"],
    "inserted" : 3,
    "failed" : 1
  }
}

The response returns a status for each item that was attempted to be inserted. The status is an array of 1s and 0s, where 1 indicates that the corresponding record was successfully inserted and 0 indicates a failed insert. The status array is guaranteed to contain the same number of elements as the records in the insert request, with the statuses in the same order as the records in the request.

The ids of all records are also returned. BlobCity by default generates a record id automatically, unless one is specified in an _id field of the data being inserted. If a record does not specify an _id, a blank value is returned for it when it fails, while the internally generated _id is returned for every successfully inserted record. For a record that explicitly specifies an _id but fails to insert due to some user-configured condition or formula restriction, the specified _id is returned in the response along with a status of 0 for the corresponding record. A sample response payload for one failed record with a manually specified _id in the request is shown below.

"p" : {
  "status" : [1,0,1,1],
  "ids" : ["id1","id2","id3","id4"],
  ...
}
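
For reference, a corresponding request payload that manually specifies _id values within JSON records might look like the following sketch; all field names other than _id are illustrative:

"p" : {
  "data" : [
    {"_id" : "id1", "name" : "Joe"},
    {"_id" : "id2", "name" : "Jane"},
    {"_id" : "id3", "name" : "Jack"},
    {"_id" : "id4", "name" : "Jill"}
  ]
}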