BlobCity on Digital Ocean Marketplace

Using the marketplace listing of BlobCity DB on Digital Ocean

👍 Create a BlobCity DB Droplet: COMING SOON

Once you create a BlobCity DB Droplet, it boots up with an instance of the DB running on it. You will need to SSH into this Droplet to retrieve a randomly generated password for the root user of the DB.

SSH into Droplet

ssh root@<ip-address-of-droplet>

The default user on the Droplet is root. Assuming you provided your SSH key while creating the Droplet, you should be able to access the Droplet directly using the above command.
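If your key is not picked up automatically, you can point ssh at it explicitly. A minimal sketch, assuming your private key is stored at ~/.ssh/id_rsa (substitute the path to your own key file):

# Connect using a specific private key
ssh -i ~/.ssh/id_rsa root@<ip-address-of-droplet>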

Getting DB Access Credentials

Your BlobCity DB instance starts with a user called root already created. The password for this user is automatically generated at first boot and written to a text file located at /mnt/data/root-pass.txt.

cat the root-pass.txt file to retrieve the randomly generated password for the root user.

cat /mnt/data/root-pass.txt

You may retain this password for accessing your database, but it is highly recommended that you change it upon first login.

Accessing DB CLI

The command line interface (CLI) is the preferred way to manage your database instance. You may connect to it either from within an SSH session or from an external machine using the Droplet IP address. Telnet on port 10113 gives you access to the CLI.

telnet localhost 10113
telnet <droplet-ip> 10113

You may also use nc <droplet-ip> 10113 if telnet is not installed on your computer. Note that the telnet or nc connection is not secured, so it is recommended that you always use the CLI from within an SSH session rather than connecting over the public IP address.
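If you do need the CLI from your own machine, a common option is to tunnel the CLI port over SSH so the unencrypted traffic never leaves the Droplet. A minimal sketch, assuming the default SSH access described above:

# Forward local port 10113 to the CLI port on the Droplet
ssh -L 10113:localhost:10113 root@<droplet-ip>

# In a second terminal on your own machine, connect through the tunnel
telnet localhost 10113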

Creating a Datastore & Collection

To use the database, you must first create a datastore and then create collections within it. The CLI can be used to create both the datastore and its collections.

blobcity>create-ds test
Datastore successfully created
blobcity>create-collection test.my_collection
Collection successfully created
blobcity>create-collection test.my_collection2
Collection successfully created

You may now insert data into any of the created collections and fire SQL queries on data stored in the collections.

Inserting Data

Data can be inserted using the REST API, or data files can be loaded onto the server and an import command may then be fired using the CLI.

The database can access data files located within the folder /mnt/data/{ds}/ftp. You will need to use scp, rsync or any other mechanism you are comfortable with to load your data files into this folder on the server.

The {ds} attribute must be substituted with the name of your datastore.
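For example, to stage a CSV file for the test datastore created earlier, you could copy it from your machine with scp or rsync. A minimal sketch, assuming the file is named my_file.csv and the datastore is test:

# Copy the CSV into the datastore's ftp folder
scp my_file.csv root@<droplet-ip>:/mnt/data/test/ftp/

# Or sync it with rsync
rsync -avz my_file.csv root@<droplet-ip>:/mnt/data/test/ftp/

With the file in place, run the import command from the CLI: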

blobcity>import-csv test.my_collection /my_file.csv
Done in 37 (ms)

The above example imports the CSV file my_file.csv into collection my_collection. The first row of the CSV is assumed to contain column names. my_file.csv in this case must be placed at /mnt/data/test/ftp/my_file.csv on the server for the import operation to succeed.

Running SQL Queries

Once data is loaded, you may fire an SQL query on the collection to select your data. You can do this using the CLI or the REST interface.

blobcity>sql test: select * from `test`.`my_collection` where `col1` = 2

Mount a Storage Volume

While not necessary, it is highly recommended that you set up a block storage volume to store your data. By default the database stores data on the boot volume, which is susceptible to data loss if the Droplet is terminated or crashes for any reason.

A block storage volume can be retained when a Droplet is terminated, and the same volume can be mounted on another Droplet you create, giving you flexibility and better protection against total data loss.

The volume should be XFS formatted for optimal performance, especially if you primarily plan to use the on-disk storage engine. XFS or EXT4 may be used with almost equal performance if primary data storage is in-memory.

Mount the volume at /mnt/data. This will mask any existing data you have placed inside that folder. If you prefer to mount the volume at another location, say /mnt/my_vol, the database can be configured to write to it by changing the volume mount config in /opt/docker-compose.yml.
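As a rough sketch of the volume setup, assuming the block storage volume appears as /dev/sda on your Droplet (verify the actual device name with lsblk before formatting, as formatting erases the volume):

# Format the volume as XFS
mkfs.xfs /dev/sda

# Mount it at the database data directory
mkdir -p /mnt/data
mount /dev/sda /mnt/data

# Optionally persist the mount across reboots
echo '/dev/sda /mnt/data xfs defaults,nofail 0 2' >> /etc/fstab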