Baqend Cloud hosts your application data and business logic and delivers it over a global caching infrastructure for performance at the physical optimum.

With Baqend, you use a fully managed backend service with an automatically accelerated JavaScript API directly from your application (e.g. written in Angular or React). As the platform provides a rich set of turnkey features and takes over the responsibility for backend performance, you save major development effort.

In terms of architecture, Baqend gives you the hosting of your application (e.g. HTML and JS files) plus the APIs for backend concerns such as data storage, queries, push, OAuth, user management, access control and server-side business logic:

Note: If you have any questions not answered by this guide, feel free to contact us via support@baqend.com or the chat on the bottom.

Getting Started

These are our recommendations for getting things rolling quickly:

Baqend JS SDK

The JavaScript SDK is packaged as a UMD module and can be used with RequireJS, Browserify or without any module loader. To get started, install the Baqend SDK with npm or Bower, or download the complete package from GitHub.

Note: If you are not using JavaScript, you can use Baqend via its REST API from the programming language of your choice. Baqend's REST API is documented with Swagger and can be explored here. In the dashboard of your Baqend app, you can go to "API Explorer" to explore and use the REST API of your own instance.


To install Baqend, just add our CDN-hosted script to your website (available over both HTTPS and HTTP).

<script src="//www.baqend.com/js-sdk/latest/baqend.min.js"></script>

For additional setup information visit our GitHub page.

Tip: If you use our Starter Kits the Baqend SDK is already included and you can skip this setup.
Note: It is generally a good idea to use the latest SDK version from //www.baqend.com/js-sdk/latest/baqend.min.js in development to always be up-to-date. In production, however, you should use the last exact version you tested with. Be aware that otherwise minor changes in a newly released version may break parts of your production application. See our latest changes to the SDK.

The Baqend SDK is written and tested for Chrome 24+, Firefox 18+, Internet Explorer 9+, Safari 7+, Node.js 4+, iOS 7+, Android 4+ and PhantomJS 1.9+.

The Baqend SDK does not require any additional dependencies; however, it is shipped with a few bundled dependencies:

The Baqend JavaScript SDK and all its bundled dependencies are shipped under the MIT License.

To see that Baqend is working, paste the following after the Baqend script tag. It will replace the HTML body with 5 raw todo items from the tutorial application. Delete the snippet afterwards.

  DB.connect('toodle').then(function() {
    return DB.Todo.find().limit(5).resultList();
  }).then(function(result) {
    document.querySelector('body').innerHTML = "<pre>" + JSON.stringify(result, null, " ") + "</pre>";
  });

Baqend + Node.js

The Baqend SDK is fully compatible with Node.js. This means you can use the SDK in a Node.js-based application for saving data, logging in users, etc. Additionally, Baqend modules and handlers are based on Node.js and are run and scaled automatically by Baqend.

To install the SDK for a Node.js project do an npm install --save baqend and use require('baqend') in your code.

var DB = require('baqend');

The Baqend SDK is compatible with RequireJS, Browserify, ES6 and TypeScript as well as all major build tools (Gulp, Grunt, Webpack, npm scripts, etc.).

Connect your App to Baqend

After including the Baqend SDK in your app, connect it with your Baqend. Simply call the connect method on the DB variable:

//connect to the example app
DB.connect('example');
//or use a TLS-encrypted (SSL) connection to Baqend
DB.connect('example', true);
Note: If you use a custom deployment, e.g. the Baqend community edition, you must pass a hostname or a complete URL to the connect call: DB.connect('https://mybaqend.example.com/v1')

You can pass a callback as a second argument, which will be called when the connection is successfully established.

DB.connect('example', function() {
  //work with the DB
});

Behind the scenes, Baqend is requested, the metadata of your app is loaded, and the data models are created and initialized. If you want to register a callback after the connection has already been established, you can use the ready method to wait for the SDK initialization.

DB.ready(function() {
  //work with the DB
});

If you are familiar with Promises you can alternatively use the returned promise instead of passing a callback. This works for all places in the Baqend SDK that exhibit asynchronous behaviour.

DB.ready().then(function() {
  //work with the DB
});
Tip: Baqend not only gives you APIs for serverless development but also hosts and accelerates your assets, like HTML, CSS, images, etc. See Hosting for more details.

The Baqend JavaScript SDK works best for:

Though Baqend does not make any assumptions about the tooling, here are the tools we most frequently see used with Baqend:

Accessing Data

After the Baqend SDK has been successfully initialized, all defined classes can be accessed using the DB instance. Just use the name of the class to access the object factory.

DB.ready(function() {
  DB.Todo //The Todo class factory
});

The object factory can be called or used like a normal JavaScript constructor to create instances.

var todo = new DB.Todo({name: 'My first Todo'});

The constructor accepts one optional argument, which is a (JSON-)object containing the initial values of the object.

The object attributes can be accessed and changed by their names.

var todo = new DB.Todo({name: 'My first Todo'});
console.log(todo.name); //'My first Todo'
todo.active = true;


Promises are a programming paradigm for working with asynchronous code. Primarily used for communication and event-scheduled tasks, they make code much more readable than the callback-based approach. A Promise represents the public interface for an asynchronous operation and can be used to chain tasks that depend on each other.

The Baqend SDK supports both paradigms, therefore each asynchronous method accepts an optional success and an error callback and returns a Promise for further tasks.

There are two common ways to initialize a Promise. You can create a new instance of a Promise with an executor function. With the given resolve and reject functions, it can decide whether the promise should be fulfilled with a given value or rejected with an error.

var promise = new Promise(function(resolve, reject) {
  var delay = Math.random() * 2000 + 1000;
  window.setTimeout(function() {
    //We fulfill the promise after the randomized delay
    resolve(delay);
  }, delay);
});

The second way is to create an already resolved Promise with a given value.

var promise = Promise.resolve(200);

If you want to listen for the outcome of such a promise, you can register an onFulfilled and an onRejected listener with the then(onFulfilled, onRejected) method of the promise. When the promise gets resolved, the onFulfilled listener is called with the fulfilled value. In case of rejection, the onRejected listener is called with the error.

promise.then(function(value) {
  console.log('We have waited ' + value + 'ms');
}, function(e) {
  console.log('An unexpected error with message: ' + e.message + ' occurred.');
});

The Promise.then method returns a new promise which will be resolved with the result of the passed listener. The listener itself can also perform asynchronous operations and return another promise which will then be used to resolve the outer Promise.

promise.then(function(value) {
  return anotherAsyncTask(value);
}).then(function(anotherValue) {
  //will only be called if the first promise
  //and anotherAsyncTask's promise are fulfilled;
  //anotherValue holds the fulfilled value of anotherAsyncTask
});

For additional examples and a more detailed explanation consult the MDN Promise Documentation.

The Baqend SDK uses the promise-based approach for the entire documentation since the code is more readable and is generally considered the best way to work with asynchronous code in JavaScript.
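To make the chaining semantics concrete, here is a small self-contained sketch in plain JavaScript (no Baqend involved): each then() returns a new promise that resolves with the callback's return value, and returning another promise flattens the chain.

```javascript
// asyncDouble simulates an asynchronous operation that eventually
// fulfills its promise with twice the input.
function asyncDouble(n) {
  return new Promise(function(resolve) {
    setTimeout(function() { resolve(n * 2); }, 10);
  });
}

var chained = asyncDouble(2)
  .then(function(four) {
    return asyncDouble(four); // returning a promise flattens the chain
  })
  .then(function(eight) {
    return eight + 1;         // returning a plain value also works
  });

chained.then(function(result) {
  console.log(result); // 9
});
```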

Baqend Dashboard

The Baqend dashboard is the main tool you will use to manage and configure your Baqend instance. After you have created your first app, the left navigation bar gives you a quick overview of all the configurable and usable functionalities of Baqend.

Here is a quick overview of those:

Baqend Modules - can be used to create Baqend code, which can later be called by your app to execute trusted business logic. See also Baqend Modules. By clicking the + you can create new modules; afterwards, a module code template is opened.

Tables - are where you create and extend the data model of Baqend to fit your app's requirements. By clicking on a class name, you can view and edit the table content and its metadata, like schema, access rules and code hooks. Each table is represented by one entity class and each row is an instance of this class in the SDK. On the upper right side you can navigate with the tabs through these categories:

There are three predefined classes which you can also extend with custom fields:

Additionally, you can create new custom classes with a click on the + button near the Data label. Type an unused name and hit enter. The schema view will appear and you can begin to model your own class schema.

Logs - Here you can view the logs generated by API accesses as well as your application logs.

API Explorer - The API Explorer provides a GUI for the underlying REST API of Baqend. Here you can explore and make direct HTTP calls to your Baqend server.

Settings - Here you can configure additional settings of your Baqend app, like:

Baqend CLI

The CLI (Command Line Interface) provides a simple way to:

The Baqend CLI can easily be installed globally with npm install -g baqend (to get npm you just need to have Node.js installed). Afterwards you can use the CLI by typing baqend --help in any folder.

Note: Ensure that your PATH system environment variable contains the global npm bin path ($ npm bin -g) so that npm-installed commands work properly.


Tip: A good way to manage a Baqend-based project is to manage the files and collaboration via git and using the CLI to deploy files and code to Baqend.

The Baqend CLI is automatically shipped with our SDK. You can use the Baqend CLI directly in any npm script. To do so, add a baqend script entry to the scripts section of your project's package.json:

  "scripts": {
    "baqend": "baqend"
  }

Afterwards you can type npm run baqend -- --help

Note: The extra -- is required to separate the npm run arguments from the Baqend ones.

Login and Logout

Before you can actually deploy assets and code to your app, you must log the CLI in to your Baqend account. By typing baqend login you can save your login credentials on your local machine.

If you do not want to save your login credentials, you can skip the login step and provide the credentials each time you deploy.

Note: If you have created your Baqend account with OAuth (Google, Facebook or GitHub) you must add a password to your account first. This can be done in the account settings of the dashboard.

You can log out of the Baqend CLI and remove all locally stored credentials by typing baqend logout.


With the deploy command you can upload your static files and assets as well as Baqend code (handlers and modules) to your Baqend app:

$ baqend deploy your-app-name

We expect a folder named www by default that is uploaded to the Baqend www folder and served as a website.

Read more about Baqend Hosting in the Hosting chapter.

Tip: You can provide a different web folder to upload with baqend deploy --file-dir dist.

The CLI can additionally deploy your Baqend code. Baqend code should be located in a folder named baqend. The following screenshot visualizes a typical project layout including Baqend code.

All Baqend modules should sit at the top level of the baqend folder. For example, baqend/firstModule.js will be uploaded as firstModule.

For each code handler you should create a folder named after the table it belongs to. Within that folder, the files should be named:

baqend/<Table>/insert.js for an onInsert handler
baqend/<Table>/update.js for an onUpdate handler
baqend/<Table>/delete.js for an onDelete handler
baqend/<Table>/validate.js for an onValidate handler

Thus, baqend/User/insert.js contains the insert handler code which is invoked each time a new user object is inserted into the User table.
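As an illustration, such a handler file could look like the following sketch. The (db, obj) handler signature and the signedUpAt field are assumptions made for this example; consult the Baqend Code chapter for the exact API.

```javascript
// Hypothetical sketch of baqend/User/insert.js: the handler stamps a
// field on every object before it is inserted. In the deployed file,
// the function would be assigned to exports.onInsert.
function onInsert(db, obj) {
  obj.signedUpAt = new Date().toISOString(); // hypothetical field
}

// Local smoke test with a stub object:
var user = { username: 'johnny' };
onInsert(null, user);
console.log(typeof user.signedUpAt); // 'string'
```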

Read more about Baqend code in the Baqend Code chapter.

Typings (TypeScript Support)

The Baqend SDK itself comes with a TypeScript declaration file, which enables seamless integration into TypeScript and allows better code completion. The SDK comes with a dynamic API part, which is generated on the fly depending on your current schema. To make your TypeScript application work properly with this dynamic part you can generate the additional typings for your current schema with the CLI.

With baqend typings your-app-name the CLI generates the TypeScript declaration file in the current folder. You can then add the generated file to your tsconfig.json file.

You can update the generated file each time you have changed tables or fields in the Baqend Dashboard by just repeating this step.

Tip: You should check the generated file into your version control system to share an up-to-date version of the definition file.


Each entity has some basic methods for persisting and retrieving its data. This pattern is known as data access objects (DAO) or active records and is common practice in persistence frameworks.


After creating a new object, it can be persisted to Baqend with an insert() call. The insert call ensures that the object always gets its own unique id by generating a new one if none was provided.

var todo = new DB.Todo({id: 'Todo1', name: 'My first Todo'});
//we can use the object id right now
console.log(todo.id) //'Todo1' 

todo.insert().then(function() {
  console.log(todo.version); //1
});


If an object is persisted it can be loaded by its id (aka primary key). This method is very handy with custom (i.e. non-generated) ids.

DB.Todo.load('Todo1').then(function(todo) {
  console.log(todo.name); //'My first Todo'
});

If an object is loaded from Baqend, all its attributes, collections and embedded objects will be loaded, too. References to other entities will not be loaded by default. You can, however, specify an optional depth parameter to indicate how deep referenced entities should be loaded:

DB.Todo.load('Todo1', {depth: 1}).then(function(todo) {
  // With 'depth: 1' all directly referenced objects will be loaded.
});

When you load the same object a second time, the object will be loaded from the local cache. This ensures that you always get the same object instance for a given object id.

DB.Todo.load('Todo1').then(function(todo1) {
  DB.Todo.load('Todo1').then(function(todo2) {
    console.log(todo1 === todo2); //true
  });
});
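This cache behaviour can be pictured as an identity map. The following plain-JavaScript sketch (not the SDK internals) shows the guarantee it provides: repeated loads of the same id hand out the same instance.

```javascript
// Minimal identity-map sketch: a per-id cache that returns the same
// object instance for repeated loads of the same id.
var cache = {};

function load(id, fetch) {
  if (!(id in cache)) {
    cache[id] = fetch(id); // first load: fetch and remember the instance
  }
  return cache[id];        // later loads: return the cached instance
}

function fetchTodo(id) {
  return { id: id, name: 'My first Todo' };
}

var todo1 = load('Todo1', fetchTodo);
var todo2 = load('Todo1', fetchTodo);
console.log(todo1 === todo2); // true
```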


After having loaded an instance and having done some modifications, you usually want to write the modifications back to Baqend.

todo.name = 'My first Todo of this day';
return todo.update();

The update() method writes your changes back to Baqend, if no one else has modified the object in the meantime. To detect concurrent object modifications, each entity has a version. Every time changes are written back to Baqend, the versions are matched. If the version in Baqend differs from the provided version, the object was modified by someone else and the changes are rejected, since the object is outdated, i.e. a concurrent modification occurred. This is called optimistic concurrency control: changes are performed locally and then sent to the server, and only in the rare event of a consistency violation is the change operation rejected.

todo.name = 'My first Todo of this day';
return todo.update().then(function() {
  //the todo was successfully persisted
}, function(e) {
  //the update was rejected. Do we want to reapply our changes?
});
Note: When you try to update an already deleted object, it will also be treated as a concurrent modification and the update will be rejected.

There are also situations where you may want to bypass this check and force a write of your changes. To do so, the force option can be passed to the update method. Be aware that this last-writer-wins scheme may result in lost updates.

todo.name = 'My first Todo of this day';
//force the update and potentially overwrite all concurrent changes
return todo.update({force: true}).then(function() {
  //the todo was successfully persisted
});

Each object also automatically keeps track of its creation time and the time of its last update in the form of DateTime fields. Both of these fields are maintained automatically and are read-only, i.e. you cannot change them yourself.

todo.name = 'My first Todo of this day';
return todo.update().then(function(updatedTodo) {
  //updatedTodo is the same instance as todo
});


You can delete an object by calling its delete() method. It will delete the entity from Baqend and drop the entity from the local cache.

todo.delete().then(function() {
  //the object was deleted
}, function() {
  //a concurrent modification prevented the removal
});

Just like the update() method, delete() matches the local version with the version in Baqend and deletes the object only if the version is still up-to-date.

Again, you can pass the force option to bypass the version check.

todo.delete({force: true});


As you have seen in the previous examples, you can insert() new objects and update() existing objects. If it is irrelevant whether the object is already persisted to Baqend, just use the save() method. This either performs an update or an insert, depending on the current state of the object.

var todo = new DB.Todo({id: 'Todo1', name: 'My first Todo'});
todo.save().then(function() { //inserts the object
  todo.name = 'My first Todo of this day';
  todo.save(); //updates the object
});

Concurrency with Optimistic Saving

Without the explicit force flag, updates and saves can fail due to concurrent operations performed on the same object. With the optimisticSave method you can conveniently specify the retry logic to apply if the update fails. Even under high concurrency, one writer will always succeed, so the system still makes progress.

Under the hood, this pattern of optimistic concurrency control relies on version numbers of the objects and conditional HTTP requests that only apply changes when the underlying object has not been changed.

DB.Todo.load("myTodo").then(function(todo) {
  return todo.optimisticSave(function(todo, abort) {
    //this method may get called multiple times
    if (todo.participants.length > 10) {
      return abort(); //you can specify when to stop retrying
    }
    todo.participants.push("Johnny"); //apply a change --> will be saved automatically
  });
});
Tip: Optimistic saving is particularly useful for server-side code (modules) that updates objects and may be invoked concurrently.
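The retry loop behind this pattern can be sketched in plain JavaScript. This is an illustration of optimistic concurrency control, not the SDK implementation: the in-memory store and the conditionalUpdate function stand in for Baqend and its conditional HTTP requests.

```javascript
// An in-memory versioned object standing in for the server-side state.
var store = { version: 1, participants: ['Ann'] };

function conditionalUpdate(expectedVersion, apply) {
  // Stands in for the conditional HTTP request: the change is only
  // applied if no concurrent write has bumped the version.
  if (store.version !== expectedVersion) return false;
  apply(store);
  store.version++;
  return true;
}

function optimisticSave(change, maxRetries) {
  for (var i = 0; i < maxRetries; i++) {
    var snapshot = store.version;               // read the current version
    if (conditionalUpdate(snapshot, change)) {
      return true;                              // write succeeded
    }
    // conflict: loop again, re-applying the change to the fresh state
  }
  return false;                                 // gave up after maxRetries conflicts
}

optimisticSave(function(todo) { todo.participants.push('Johnny'); }, 3);
console.log(store.participants); // ['Ann', 'Johnny']
```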

Load / Refresh

Sometimes you want to ensure that you have the latest version of a previously loaded entity, for example before performing an update. In that case you can use the load({refresh: true}) method of the entity to get the latest version from Baqend.

//updates the local object with the most up-to-date version
todo.load({refresh: true}).then(function() {
  todo.name = 'My first Todo of this day';
  todo.save(); //updates the object
});

While performing an insert or update, you can also refresh the object after performing the operation. To do so you can pass the refresh flag to the insert(), update() or save() method.

todo.save({refresh: true}).then(...); //refreshing the object after saving it

This option is very useful if you have a Baqend Code update handler which performs additional server-side modifications on the entity being saved. By passing the refresh flag you ensure that these modifications are loaded from Baqend after the entity has been saved.

Schema and Types

Behind each object persisted to and loaded from Baqend there is a schema which describes the structure of its instances. It specifies which attributes of an object will be tracked and saved (e.g. Todo.name), their types (e.g. String) and, optionally, constraints (e.g. not null).

The types that Baqend supports can be classified in five categories.

Data Modelling

Here is an example for creating the data model of Todo objects in the dashboard:

Under the hood, Baqend stores data in MongoDB. However, in contrast to data modelling in MongoDB, Baqend supports a rich schema that is checked and validated whenever data is stored. By using the JSON data types, Baqend objects can have arbitrary schemaless parts.

Tip: Best practices for schemaless and schema-rich data modelling can both be applied in Baqend by mixing data types with JSON.

Embedding vs Referencing

The major decision when modelling data in Baqend is the choice between embedding and referencing.

With embedding, related content is stored together. This is also called denormalization as the data might be duplicated in multiple places. Embedding is useful for:

The advantage of embedding is that data can be read in one chunk making retrieval more efficient. The downside is that whenever embedded objects are contained in multiple parent objects, more than one update has to be made in order to keep all instances of the embedded object consistent with each other.

With referencing, dependent data is not embedded, but instead references are followed to find related objects. In the world of relational database systems this is called normalization, and the references are called foreign keys. Referencing is a good choice if:

The downside of referencing is that multiple reads and updates are required if connected data is changed. With the depth parameter you can, however, load and save entities with all their references. See References.
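To illustrate the difference, here are the two modelling styles as plain object shapes (the field names are made up for this example):

```javascript
// Embedding: related content is stored inside the parent object.
var embedded = {
  id: '/db/Todo/1',
  name: 'Plan trip',
  activities: [                      // embedded: lives inside the Todo
    { start: '2015-03-24T10:46:13Z', end: null }
  ]
};

// Referencing: only the id of the related object is stored.
var referencing = {
  id: '/db/Todo/1',
  name: 'Plan trip',
  doNext: '/db/Todo/2'               // reference: just an id string
};

console.log(typeof referencing.doNext);          // 'string'
console.log(Array.isArray(embedded.activities)); // true
```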

Entity Objects

In general there are two types of objects. The first type - Entities - are those objects which have their own identity, version and access rights. They can be directly saved, loaded and updated. Each entity has its own unique id. The id is immutable and set at object creation time.

var todo = new DB.Todo({name: 'My first Todo'});
console.log(todo.id); //'84b9...'

Instead of relying on automatic generation, objects can also have a custom id. This allows you to assign ids that are memorable and meaningful.

var todo = new DB.Todo({id: 'Todo1', name: 'My first Todo'});
console.log(todo.id); //'Todo1'
Note: The save call will be rejected, if the id already exists!


Entity objects can reference other entities by reference, i.e. their id. Referenced objects will not be persisted inside another entity; instead, only a reference to the other entity is persisted.

var firstTodo = new DB.Todo({name: 'My first Todo'});
var secondTodo = new DB.Todo({name: 'My second Todo'});

firstTodo.doNext = secondTodo;

To save a reference, you just call the save() method on the referencing entity.

//the todo instance will automatically be serialized to an object reference
firstTodo.save();

Internally, the reference is converted to a string like /db/Todo/84b9... and persisted inside the referencing entity. The referenced entity will not be saved by default. You can pass the depth options flag to save the complete object graph by reachability.

//will also save secondTodo, since it is referenced by firstTodo
firstTodo.save({depth: true});

When an entity is loaded from Baqend, referenced entities will not be loaded by default. Instead, an unresolved entity (hollow object) is set for the referenced entity. If you try to access attributes of an unresolved entity, an object is not available error is thrown.

//while loading the todo, the reference will be resolved to the referenced entity
DB.Todo.load('7b2c...').then(function(firstTodo) {
  console.log(firstTodo.name); //'My first Todo'
  console.log(firstTodo.doNext.name); //will throw an object not available error
});

The isReady field indicates if an entity is already resolved.

DB.Todo.load('7b2c...').then(function(firstTodo) {
  console.log(firstTodo.doNext.isReady); //false
});

Calling load() on an unresolved entity resolves it, i.e. the referenced object is loaded.

firstTodo.doNext.load(function() {
  console.log(firstTodo.doNext.isReady); //true
  console.log(firstTodo.doNext.name); //'My second Todo'
});

If the object graph is not very deep, references can easily be resolved by reachability.

//loading the todo will also load the referenced todo
DB.Todo.load('7b2c...', {depth: true}).then(function(firstTodo) {
  console.log(firstTodo.name); //'My first Todo'
  console.log(firstTodo.doNext.name); //'My second Todo'
});

For further information on persisting and loading strategies see the Persistence chapter.

Embedded Objects

The second type of objects are embedded objects. They can be used within an entity or a collection like a list or map. They do not have an id and can only exist within an entity. Embedded objects are saved, loaded and updated with their owning entity and are persisted together with it. Embedded objects thus have the structure of an object but the behaviour of a primitive type (e.g. a String). This concept is also known as value types, user-defined types or second-class objects.

Embedded objects can be created and used like entity objects.

var activity = new DB.Activity({start: new Date()});
console.log(activity.start); //something like 'Tue Mar 24 2015 10:46:13 GMT'
activity.end = new Date();

Since embeddables do not have an identity, they hold neither an id, version nor acl attribute.

var activity = new DB.Activity({start: new Date()});
console.log(activity.id); //undefined

To actually persist an embedded object you have to assign the embedded object to an entity and save that outer entity.

var activity = new DB.Activity({start: new Date()});
var todo = new DB.Todo({name: 'My first Todo', activities: [activity]});
todo.save(); //persists the todo together with the embedded activity


Primitive types are the basic attribute types known from programming languages. Whenever an entity is saved, all attribute values are checked against the types described by the schema. This is one of the biggest advantages of having a schema: data cannot easily be corrupted, since its correct structure is automatically enforced. Note that the JSON data type gives you full freedom in deciding which parts of an object should be structured and which parts are schema-free. The following table shows all supported attribute types of Baqend and their corresponding JavaScript types.

| Baqend Primitive | JavaScript type | Example | Notes |
|---|---|---|---|
| String | String | "My Sample String" | |
| Integer | Number | 456 | 64-bit integer; fractions are truncated |
| Double | Number | 456.456 | 64-bit floating point number |
| Boolean | Boolean | true | |
| DateTime | Date(<datetime>) | new Date() | The date will be normalized to GMT. |
| Date | Date(<date>) | new Date('2015-03-15') | The time part of the date will be stripped out. |
| Time | Date(<datetime>) | new Date('2015-01-15T13:30:00Z') | The date part will be stripped out and the time will be saved in GMT. |
| File | File(<fileId>) | new File('/file/www/my.png') | The file id points to an uploaded file. |
| GeoPoint | DB.GeoPoint(<lat>, <lng>) | new DB.GeoPoint(53.5753, 10.0153) | You can get the current GeoPoint of the user with GeoPoint.current(). This only works over an HTTPS connection. |
| JsonObject | Object | {"name": "Test"} | Semistructured JSON is embedded within the entity. Any valid JSON is allowed. |
| JsonArray | Array | [1,2,3] | |
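The normalization described for the Date and Time types can be sketched in plain JavaScript (an illustration of the rule, not SDK code): Date keeps only the calendar day, Time only the clock part, both in GMT.

```javascript
// Sketch of the stripping rules using ISO-8601 string slicing.
function toDateOnly(d) {
  return d.toISOString().substring(0, 10);  // keeps '2015-01-15'
}

function toTimeOnly(d) {
  return d.toISOString().substring(11, 19); // keeps '13:30:00'
}

var d = new Date('2015-01-15T13:30:00Z');
console.log(toDateOnly(d)); // '2015-01-15'
console.log(toTimeOnly(d)); // '13:30:00'
```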


Collections are typed by a reference, an embedded object class or a primitive type. The Baqend SDK supports three types of collections, which are mapped to native JavaScript Arrays, ES6 Sets and Maps:

| Baqend Collection | Example | Supported element types |
|---|---|---|
| collection.List | new DB.List([1,2,3]) or new Array(1,2,3) | All non-collection types are supported as values. |
| collection.Set | new DB.Set([1,2,3]) or new Set([1,2,3]) | Only String, Boolean, Integer, Double, Date, Time, DateTime and references are allowed as values, since only these types can be compared by identity. |
| collection.Map | new DB.Map([["x", 3], ["y", 5]]) or new Map([["x", 3], ["y", 5]]) | Only String, Boolean, Integer, Double, Date, Time, DateTime and references are allowed as keys. All non-collection types are supported as values. |

For all collection methods see the MDN docs of Array, Set and Map.
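Since the collection types map to native JavaScript structures, the usual Array, Set and Map operations apply. A short sketch using only native types:

```javascript
// List values map to plain arrays.
var list = [1, 2, 3];
list.push(4);

// Set values map to ES6 Sets; duplicates are ignored.
var set = new Set([1, 2, 3]);
set.add(2);

// Map values map to ES6 Maps with typed keys.
var map = new Map([['x', 3], ['y', 5]]);
map.set('z', 7);

console.log(list.length);  // 4
console.log(set.size);     // 3
console.log(map.get('z')); // 7
```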


To retrieve objects by more complex criteria than their id, queries can be used. They are executed on Baqend and return the matching objects. The Baqend SDK features a query builder that creates MongoDB queries under the hood. It is possible to formulate native MongoDB queries, but using the query builder is the recommended way: it is far more readable and does all the plumbing and abstraction from MongoDB obscurities.

resultList, singleResult and count

The simplest query is one that has no filter criterion and thus returns all objects. The actual result is retrieved via the resultList method.

DB.Todo.find().resultList(function(result) {
  result.forEach(function(todo) {
    console.log(todo.name); //'My first Todo', 'My second Todo', ...
  });
});

You can also use the depth-parameter to query the entities to a specified depth just like for normal reads.

To find just the first matching object use the singleResult method.

DB.Todo.find().singleResult(function(todo) {
  console.log(todo.name); //'My first Todo'
});

Both resultList and singleResult support deep loading to also load references.

If you just need the number of matching objects, use the count method.

DB.Todo.find().count(function(count) {
  console.log(count); //'17'
});


Usually queries are employed to apply some kind of filter. The query builder supports many different filters that can be applied to entity attributes. By default, chained filters are and-combined.

DB.Todo.find()
  .matches('name', /^My Todo/)
  .equal('active', true)
  .lessThanOrEqualTo('activities.start', new Date())
  .resultList(...);

The above query searches for all todos, whose name starts with 'My Todo', are currently active and contain an activity in its activities list that has been started before the current date.

Note that all valid MongoDB attribute expressions can be used as a field name in a filter, in particular path-expressions such as 'activities.start'.

If you are familiar with MongoDB queries, you can use the where method to describe a query in MongoDB's JSON format. An equivalent query to the above one would look like this:

DB.Todo.find()
  .where({
    "name": { "$regex": "^My Todo" },
    "active": true,
    "activities.start": { "$lte": { "$date": new Date().toISOString() }}
  })
  .resultList(...);

The following table lists all available query filters and the types on which they can be applied:

| Filter method | MongoDB equivalent | Supported types | Notes |
|---|---|---|---|
| equal('name', 'My Todo') | $eq | All types | Complex types like embedded objects only match if their complete structure matches. |
| notEqual('name', 'My Todo') | $ne | All types | Complex types like embedded objects only match if their complete structure matches. |
| greaterThan('total', 3) | $gt | Numbers, Dates, Strings | gt() is an alias |
| greaterThanOrEqualTo('total', 3) | $gte | Numbers, Dates, Strings | ge() is an alias |
| lessThan('total', 3) | $lt | Numbers, Dates, Strings | lt() is an alias |
| lessThanOrEqualTo('total', 3) | $lte | Numbers, Dates, Strings | le() is an alias |
| between('total', 3, 5) | - | Numbers, Dates, Strings | Equivalent to gt('total', 3).lt('total', 5) |
| in('total', 3, 5 [,...]) | $in | All types | On primitive fields, any of the given values has to match the field value. On set and list fields, at least one value must be contained in the collection for the filter to match. |
| notIn('total', 3, 5 [,...]) | $nin | All types | On primitive fields, none of the given values may match the field value. On set and list fields, none of the given values may be contained in the collection for the filter to match. |
| isNull('name') | - | All types | Checks if the field has no value; equivalent to equal('name', null) |
| isNotNull('name') | $exists | All types | Checks if the field has a value; equivalent to where({'name': {"$exists": true, "$ne": null}}) |
| containsAny('activities', activity1, activity2 [,...]) | $in | List, Set, JsonArray | Checks if the collection contains any of the given elements |
| containsAll('activities', activity1, activity2 [,...]) | $all | List, Set, JsonArray | Checks if the collection contains all of the given elements |
| mod('total', 5, 3) | $mod | Number | The field value divided by the divisor must be equal to the remainder |
| matches('name', /^My [eman]{4}/) | $regex | String | The regular expression must be anchored (starting with ^); ignore-case and global flags are not supported. |
| size('activities', 3) | $size | List, Set, JsonArray | Matches if the collection has the specified size. |
| near('location', <geo point>, 1000) | $nearSphere | GeoPoint | The geo point field has to be within the maximum distance in meters to the given GeoPoint. Results are returned from nearest to furthest. You need a geospatial index on this field to use this kind of query; read the query indexes section for more details. |
| withinPolygon('location', <geo point list>) | $geoWithin | GeoPoint | The geo point of the object has to be contained within the given polygon. You need a geospatial index on this field to use this kind of query; read the query indexes section for more details. |

You can get the current GeoPoint of the User with DB.GeoPoint.current(). This only works with an HTTPS connection.

References can and should be used in filters. Internally, references are converted to ids and used for filtering. To get all todos owned by the currently logged-in user, we can simply use the User instance in the query builder:

DB.Todo.find()
  .equal('owner', DB.User.me) //any other User reference is also valid here
  .resultList(...);
Note: DB.User.me refers to the currently logged-in User instance. To learn more about users and the login process, see the Users, Roles and Permissions chapter.


It is possible to sort the query result by one or more attributes. The query builder can be used to specify which attributes shall be used for sorting. Let's sort our query result by name:

DB.Todo.find()
  .matches('name', /^My Todo/)
  .ascending('name')
  .resultList(...);

If you use more than one sort criterion, the order of the result reflects the order in which the sort methods were called. The following query lists all active tasks before the inactive ones and sorts the tasks by their name in ascending order:

DB.Todo.find()
  .matches('name', /^My Todo/)
  .descending('active')
  .ascending('name')
  .resultList(...);

If you instead call ascending('name') before descending('active'), the result is sorted by name first, and the active flag only breaks ties between todos that have the same name.
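The effect of calling several sort methods can be sketched with a plain multi-key comparator (no SDK involved; the todos array below is illustrative sample data):

```javascript
// Multi-criteria sort: descending('active') before ascending('name').
// The first key dominates; the second only breaks ties.
var todos = [
  { name: 'B', active: false },
  { name: 'A', active: true },
  { name: 'C', active: true }
];

todos.sort(function (a, b) {
  if (a.active !== b.active) return a.active ? -1 : 1; // descending on active
  return a.name < b.name ? -1 : a.name > b.name ? 1 : 0; // ascending on name
});

console.log(todos.map(function (t) { return t.name; }));
// → ['A', 'C', 'B']  (active todos first, each group sorted by name)
```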

You can also set the sort criteria with the MongoDB orderby syntax by using the sort() method. An equivalent expression to the above is this:

DB.Todo.find()
  .matches('name', /^My Todo/)
  .sort({"active": -1, "name": 1})
  .resultList(...);

Offset and Limit

On larger data sets you usually don't want to load everything at once. It's often more reasonable to page through the query results instead. It is therefore possible to skip objects and limit the result size:

var page = 3;
var resultsPerPage = 30;

DB.Todo.find()
  .matches('name', /^My Todo/)
  .offset((page - 1) * resultsPerPage)
  .limit(resultsPerPage)
  .resultList(...);

Note: An offset query on large result sets yields poor query performance. Instead, consider using a filter and sort criteria to navigate through results.

For instance, if you implement simple pagination, you can sort by id and get the data of the next page with a simple greaterThan filter. As the id always has an index, this results in good performance regardless of the query result size.

var pageId = '00000-...';
var resultsPerPage = 30;

DB.Todo.find()
  .matches('name', /^My Todo/)
  .greaterThan('id', pageId)
  .ascending('id')
  .limit(resultsPerPage)
  .resultList(function(result) {
    pageId = result[result.length - 1].id;
  });
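The idea behind this cursor-style pagination can be illustrated with a small standalone sketch over an in-memory, id-sorted array (nextPage is a hypothetical helper, not part of the SDK):

```javascript
// Cursor-based pagination over an id-sorted collection: instead of an
// offset, remember the last seen id and filter with a greaterThan on it.
function nextPage(sortedById, afterId, pageSize) {
  return sortedById
    .filter(function (obj) { return obj.id > afterId; })
    .slice(0, pageSize);
}

var data = [{ id: 'a' }, { id: 'b' }, { id: 'c' }, { id: 'd' }, { id: 'e' }];

var page1 = nextPage(data, '', 2);        // first page: ids 'a', 'b'
var cursor = page1[page1.length - 1].id;  // remember the last id
var page2 = nextPage(data, cursor, 2);    // next page: ids 'c', 'd'
console.log(page2.map(function (o) { return o.id; }));
// → ['c', 'd']
```

Unlike an offset, the greaterThan filter lets the database jump directly to the cursor position via the index, so the cost does not grow with the page number.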

Composing Filters by and, or and nor

Filters are joined with and by default. In more complex cases you may want to formulate a query with one or more and, or, or nor expressions. For such cases, the initial find() call returns a Query.Builder instance. The builder provides additional methods to compose filter expressions.

The following query finds all todos which the logged-in user is not currently working on and all todos which aren't done yet:

var queryBuilder = DB.Todo.find();
var condition1 = queryBuilder
  .matches('name', /^My Todo/)
  .equal('active', false);

var condition2 = queryBuilder
  .matches('name', /^Your Todo/)
  .equal('done', false);

queryBuilder.or(condition1, condition2)
  .resultList(...);

Query Indexes

Indexes on fields that are frequently queried can massively improve the overall query performance. Therefore, our dashboard provides a very comfortable way to create custom indexes on fields. It is always a tradeoff on which fields you should create an index. A good index should be created on fields that contain many distinct values, but too many indexes on the same class can also reduce the write throughput. If you would like to read more about the indexes we currently use, visit the MongoDB indexes docs.

To create an Index open the schema view of the class and use the Index or Unique Index button to create an index. Currently we support three types of indexes:

Index: A simple index which contains a single field, used to improve queries that filter on the specified field.

Unique Index: An index that requires uniqueness of the field values. Inserting or updating objects that violate the unique constraint will be rejected with an ObjectExists error.

Geospatial Index: This index can be created on GeoPoint fields and is required for the near and withinPolygon query filters. It is created on GeoPoint fields by using the Index button.

Streaming Queries

Baqend does not only feature powerful queries, but also streaming result updates to keep your critical data up-to-date in the face of concurrent updates by other users.

Calling .stream() on a query object opens a websocket connection to Baqend, registers a streaming query and returns an event stream in the form of an RxJS observable that provides you with updates to the query result as they happen over time.

var stream = DB.Todo.find().stream();

To make your code react to result set changes, you can subscribe to the stream and provide a function that is called for every incoming change event:

var subscription = stream.subscribe(event => console.log(event));
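A minimal sketch of the subscribe/unsubscribe contract used here may help; createStream below is an illustrative stand-in for the SDK's observable, not the actual implementation:

```javascript
// Minimal observable contract: subscribe() registers an onNext callback
// and returns a subscription whose unsubscribe() stops further delivery.
function createStream() {
  var listeners = [];
  return {
    subscribe: function (onNext) {
      listeners.push(onNext);
      return {
        unsubscribe: function () {
          listeners.splice(listeners.indexOf(onNext), 1);
        }
      };
    },
    emit: function (event) {
      listeners.slice().forEach(function (listener) { listener(event); });
    }
  };
}

var stream = createStream();
var received = [];
var subscription = stream.subscribe(function (e) { received.push(e); });
stream.emit('event1');
subscription.unsubscribe();
stream.emit('event2'); // no longer delivered
console.log(received); // → ['event1']
```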

In order to activate streaming updates for a query, all you have to do is register it as a streaming query and provide a function to execute for every received change event:

var query = DB.Todo.find()
              .matches('name', /^My Todo/)
var subscription = query.stream()
              .subscribe(event => console.log(event));
new DB.Todo({name: 'My Todo XYZ'}).insert();//insert data
// The insert produces the following event:
// {
//   "matchType":"add",
//   "operation":"insert",
//   "data":{"name":"My Todo XYZ",...},
//   "date":"2016-11-09T12:42:31.322Z",
//   "target":{...},
//   "initial":true,
//   "index":1
// }
Note: You have to use the Baqend Streaming SDK to use the streaming query feature.

To stop receiving events from a streaming query, you can simply unsubscribe:

subscription.unsubscribe();

Note: Access rules for streaming queries are the same as for regular queries (see permissions). In other words, if your data would not be returned by a regular query, it won't be returned by streaming query, either.

Once subscribed to a stream, you will get an event for every database entity in the initial result set (i.e. every entity matching at subscription time) and for every entity that enters the result set, leaves the result set or is updated while in the result set.

Every event can carry the following information:

matchType: how the result set membership of the entity changed, e.g. add, changeIndex or remove.
operation: the write operation that triggered the event (insert, update, delete), or none when the event was caused by a write to a different entity.
data: the current state of the matching entity.
date: the server-side timestamp of the event.
target: the query to which the event belongs.
initial: whether the entity was part of the initial result set.
index: the position of the entity in an ordered result (undefined for entities leaving the result).

By default, you receive the initial result set and all events that are required to maintain it. However, the optional argument for the .stream([options]) function lets you restrict the kind of event notifications to receive by setting the appropriate attribute values:

Note: You can only restrict the event stream by either match types or operations, but not both.

Error Handling

On error, the subscription will automatically be canceled, but you can provide a custom error handler function that is executed whenever something goes wrong:

var onNext = event => console.log(event);
var onError = error => console.log(error);
var subscription = stream.subscribe(onNext, onError);
// A server-side error produces the following output:
// {
//   "errorMessage":"Invalid query! Limit clause required for sorting query!",
//   "date":"2016-11-11T16:48:24.863Z",
//   "target":{"name":{"$regex":"^My Todo"}}
// }

Every error event has the following attributes:

errorMessage: a message describing the error.
date: the server-side timestamp of the error.
target: the query that caused the error.

Streaming Simple Queries

Simple queries are queries that just return all entities in a collection, no filtering involved. While streaming simple queries can be very useful (for example to monitor all operations on the collection), they can produce vast amounts of events for collections that have many members or are updated very often. Therefore, you should be particularly careful to only subscribe to events you really want to be bothered with when using streaming simple queries.

For instance, if you are interested in all todo lists and only want to be notified as new lists are created, you could subscribe to the following stream:

var stream = DB.Todo.find().stream({operations: 'insert'});// initial result is delivered by default

If, on the other hand, you only care for the creation of new todo lists and not for the ones that are already in the database, you should not request the initial result set:

var stream = DB.Todo.find().stream({initial: false, operations: 'insert'});

Streaming Filter Queries

Like regular filter queries, streaming filter queries allow you to select entities based on their attribute values by applying filters.

You can, for instance, have the database send you an event for every todo list that is created with a name that matches a particular pattern:

var stream = DB.Todo.find()
               .matches('name', /^My Todo/)
               .stream({initial: false, operations: 'insert'});

It is important to note, however, that the above query will only tell you when a new todo list matches your query on insert; it will not produce an event when an already-existing list is renamed to match your pattern, because that would happen by update (while the stream is targeting insert operations only).

If you are really looking for a streaming query that gives you new matches irrespective of the triggering operation, you should work with matchTypes and leave operations at the default:

var stream = DB.Todo.find()
               .matches('name', /^My Todo/)
               .stream({initial: false, matchTypes: 'add'});// operations: ['any'] by default

To get the full picture, you can also request the initial result upfront. Initial matches are always delivered with match type add:

var stream = DB.Todo.find()
               .matches('name', /^My Todo/)
               .stream({matchTypes: 'add'});// initial: true by default

Of course, you can combine several predicates using and, or and nor. The following query keeps you up-to-date on all todo lists that are active and match one pattern or have already been marked as done and match another pattern:

var queryBuilder = DB.Todo.find();
var condition1 = queryBuilder
  .matches('name', /^My Todo/)
  .equal('active', true);

var condition2 = queryBuilder
  .matches('name', /^Your Todo/)
  .equal('done', true);

var stream = queryBuilder
               .or(condition1, condition2)
               .stream();

Streaming Sorting Queries

All features described so far are also available for sorting queries, i.e. queries that contain limit, offset, ascending, descending or sort. Streaming sorting queries are great to maintain ordered results such as high-score rankings or prioritized todo lists.

The following maintains your top-20 todo lists, sorted by urgency, name and status:

var stream = DB.Todo.find()
               .matches('name', /^My Todo/)
               .descending('urgency')// 'urgency' and 'done' are example field names
               .ascending('name')
               .ascending('done')
               .limit(20)
               .stream();

Entities that sort identically are implicitly ordered by ID. Thus, a query without explicit ordering will result in more or less random order by default as IDs are generated randomly:

var stream = DB.Todo.find()
               .matches('name', /^My Todo/)
               .limit(20)// no order provided? Implicitly ordered by ID!
               .stream();

The limit clause is mandatory and a query without limit will produce an error on subscription:

var stream = DB.Todo.find()
               .matches('name', /^My Todo/)
               .stream()//no limit clause
               .subscribe(event => console.log('Next!'),
                 error => console.log('Error!'));

A streaming sorting query with offset maintains an ordered result, hiding the first few items from you and shaping events accordingly. Since the first index in a sorting query without offset is 0, events for the following subscription will never carry index values smaller than 10 or greater than 29:

var stream = DB.Todo.find()
               .matches('name', /^My Todo/)
               .offset(10)// skip the first 10 items
               .limit(20)
               .stream();

With respect to efficiency, the same rules apply to streaming and non-streaming queries: Sorting huge results is expensive and sorting queries should therefore be avoided when filter queries would do as well.

Note: Currently, streaming sorting queries are always executed as anonymous queries, i.e. they will only give you data that is publicly visible. To retrieve data protected by object ACLs, you have to either forgo streaming (use a plain sorting query) or ordering (use a streaming query without limit, offset, ascending and descending).

Example: Subscription and Events

For an example of how a streaming query behaves, consider the following example where two users are working concurrently on the same database. User 1 subscribes to a streaming sorting query and listens for the result and updates, whereas User 2 is working on the data.

Timestamp 0: User 1 and User 2 are connected to the same database.

Timestamp 1: User 2 inserts todo1:

var todo1 = new DB.Todo({name: 'My Todo 1'});
todo1.insert();

//actual result: [ todo1 ]

Timestamp 2: User 1 subscribes to a streaming query and immediately receives a match event for todo1:

var stream = DB.Todo.find()
    .matches('name', /^My Todo/)
    .ascending('name')
    .limit(3)
    .stream();
subscription = stream.subscribe((event) => {
  console.log(event.matchType + '/'
    + event.operation + ': '
    + event.data.name + ' is now at index '
    + event.index);
});
// ... one round-trip later
//'add/none: My Todo 1 is now at index 0'

Timestamp 3: User 2 inserts todo2:

var todo2 = new DB.Todo({name: 'My Todo 2'});
todo2.insert();

//actual result: [ todo1, todo2 ]

Timestamp 4: User 1 receives a new event for todo2:

//'add/insert: My Todo 2 is now at index 1'

Timestamp 5: User 2: inserts todo3:

var todo3 = new DB.Todo({name: 'My Todo 3'});
todo3.insert();

//actual result: [ todo1, todo2, todo3 ]

Timestamp 6: User 1 receives a new event for todo3:

//'add/insert: My Todo 3 is now at index 2'

Timestamp 7: User 2 updates todo3 in such a way that its position in the ordered result changes:

todo3.name = 'My Todo 1b (former 3)';
todo3.save();

//actual result: [ todo1, todo3, todo2 ]

Timestamp 8: User 1 is notified of this update through an event that delivers the new version of todo3. The fact that todo3 had already been a match and just changed its position in the result is encoded in the event's match type changeIndex:

//'changeIndex/update: My Todo 1b (former 3) is now at index 1'

Timestamp 9: User 2 inserts todo0 which sorts before all other items in the result and therefore is assigned index 0:

var todo0 = new DB.Todo({name: 'My Todo 0'});
todo0.insert();

//entities in DB: [ todo0, todo1, todo3 ], todo2
//                 <--- within limit --->

Because of the .limit(3) clause, only the first three of all four matching entities are valid matches and the last one — currently todo2 — is pushed beyond limit and therefore leaves the result.

Timestamp 10: User 1 receives two events that correspond to the two relevant changes to the result:

//'remove/none: My Todo 2 is now at index undefined'
//'add/insert: My Todo 0 is now at index 0'

Timestamp 11: User 2 updates todo3 again, so that it assumes its original name:

todo3.name = 'My Todo 3';
todo3.save();

//entities in DB: [ todo0, todo1, todo2 ], todo3
//                 <--- within limit --->

Through this update, todo2 and todo3 swap places.

Timestamp 12: User 1 receives the corresponding events:

//'remove/update: My Todo 3 is now at index undefined'
//'add/none: My Todo 2 is now at index 2'

Timestamp 13: User 2 deletes todo3:

todo3.delete();

//entities in DB: [ todo0, todo1, todo2 ]

Note that the deleted entity was not part of the result set.

Timestamp 14: User 1 receives no event, because deleting todo3 had no effect on the query result:

//nothing happened

User 1 starts receiving the initial result directly after subscription (Timestamp 2). From this point on, any write operation performed by User 2 is forwarded to User 1 — as long as it's affecting the subscribed query's result. Changes to non-matching items have no effect in the eyes of User 1 (Timestamps 13/14).

Be aware that operation-related semantics are rather complex for sorting queries: For example, insert and update operations may trigger an item to leave the result (Timestamps 9/10 and 11/12). Similarly (even though not shown in the example), an add event can be triggered by a delete when an item enters the result set from beyond limit. When triggered by an operation on a different entity, an event may even be delivered with no operation at all (Timestamps 10 and 12).

Tip: Bottom line, be careful when filtering streaming sorting queries by operation!

Advanced Features: RxJS

The Baqend Streaming SDK is shipped with basic support for ES7 Observables, so that you can use it without requiring external dependencies. To leverage the full potential of Baqend's streaming query engine, though, we recommend using it in combination with the feature-rich RxJS client library.

In the following, we give you some references and a few examples of what you can do with RxJS and Baqend Streaming Queries.

RxJS: The ReactiveX JavaScript Client Library

Since the RxJS documentation is great and extensive, we do not go into detail on our client library, but rather provide a few references to get you started:

Maintaining Query Results

An obvious advantage of streaming queries over common non-streaming queries is the ability to keep your result up-to-date while you and other users are inserting, updating and deleting data.

For an example, imagine you and your colleagues are working on some projects and you are interested in the most urgent tasks to tackle. Your query could look something like this:

var query = DB.Todo.find()
              .matches('name', /^My Todo/)

When executed as a common non-streaming query, this will give you the current top-10 of the most urgent todos. However, as new tasks might come up and others might be ticked off by your colleagues, you have to evaluate the query again and again if you want to keep an eye on how things are going:

query.resultList(result => console.log(result));
//Did something change?
query.resultList(result => console.log(result));
//Let's check again...
query.resultList(result => console.log(result));

This pattern is inefficient and introduces staleness to your critical data.

With Baqend streaming queries, on the other hand, you can just have the database deliver the relevant changes and thus never miss a beat. The following code does not only retrieve an ordered result, but also maintains it:

var maintainResult = (result, event) => {
    if (event.matchType === 'add') {//new entity
      result.splice(event.index || 0, 0, event.data);
    } else if (event.matchType === 'remove') {//leaving entity
      for (var i = 0; i < result.length; i++) {
        if (result[i].id === event.data.id) {
          result.splice(i, 1);
          break;
        }
      }
    } else if (event.matchType === 'changeIndex') {//updated position
      var index = result.indexOf(event.data);
      result.splice(index, 1);
      result.splice(event.index, 0, event.data);
    }
    return result;
};

var subscription = query.stream().scan(maintainResult, [])
                          .subscribe(result => console.log(result));

The scan operator can be used to maintain a data structure (the accumulator) by processing the incoming events. It takes two arguments: a function that is executed for every event (the maintenance function) and the initial value for the accumulator. Every invocation uses the accumulator value returned by the previous invocation. In this case, the accumulator is the query result and is initialized as an empty array ([]). The maintenance function maintainResult(result, event) takes the current result and the incoming event and returns the updated result.
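To see what scan does under the hood, here is a minimal synchronous re-implementation working on a plain array of events (purely illustrative; the real RxJS operator emits asynchronously):

```javascript
// Minimal synchronous version of RxJS scan: feed each event through the
// maintenance function and emit every intermediate accumulator value.
function scan(events, maintain, seed) {
  var acc = seed;
  var emitted = [];
  events.forEach(function (event) {
    acc = maintain(acc, event);
    emitted.push(acc.slice()); // snapshot of the maintained result
  });
  return emitted;
}

// The same add/remove idea as maintainResult, reduced to the essentials:
function maintain(result, event) {
  if (event.matchType === 'add') {
    result.splice(event.index, 0, event.data);
  } else if (event.matchType === 'remove') {
    result.splice(result.indexOf(event.data), 1);
  }
  return result;
}

var snapshots = scan([
  { matchType: 'add', index: 0, data: 'todo1' },
  { matchType: 'add', index: 1, data: 'todo2' },
  { matchType: 'remove', data: 'todo1' }
], maintain, []);

console.log(snapshots);
// → [['todo1'], ['todo1', 'todo2'], ['todo2']]
```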

Whenever there is a change in the top-10, the complete list will be printed to the console.

No need to refresh the result.

Real-Time Aggregations

Another neat use case for streaming queries is to compute and maintain aggregates in real-time. Similar to result set maintenance, the basic idea is to keep all relevant information in an accumulator and to recompute and output the updated aggregate value whenever an event is received.


One of the simpler aggregates over a collection of entities is the cardinality or count, i.e. the number of entities in the collection. The following code will compute and maintain the cardinality of the query result:

var maintainCardinality = (counter, event) => {
  if (event.matchType === 'add') {// entering item: count + 1
    counter++;
  } else if (event.matchType === 'remove') {// leaving item: count - 1
    counter--;
  }
  return counter;
};

var subscription = stream.scan(maintainCardinality, 0)// update counter
                     .subscribe(value => console.log(value));// output counter

The current number of entities in the result set will be printed to the console whenever a change occurs.
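Because maintainCardinality is a pure function, the same logic can be exercised standalone with synthetic events, e.g. via Array.prototype.reduce:

```javascript
// Standalone run of the counting logic with hand-crafted events:
function maintainCardinality(counter, event) {
  if (event.matchType === 'add') counter++;
  else if (event.matchType === 'remove') counter--;
  return counter;
}

var count = [
  { matchType: 'add' },
  { matchType: 'add' },
  { matchType: 'change' }, // neither add nor remove: count unchanged
  { matchType: 'remove' }
].reduce(maintainCardinality, 0);

console.log(count); // → 1
```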

Tip: Count maintenance is a good example where it makes sense to not subscribe to the default match types (['all']), because you are actually only interested in add and remove events: To restrict the events you will receive to those that really matter, register the streaming query with .stream({matchTypes: ['add', 'remove']}).


Now to a more complex example: Let's say you are interested in the average number of activities of each of the todo lists matching your query.

var initialAccumulator = {
  contributors: {},// individual activity counts go here
  count: 0,// result set cardinality
  sum: 0,// overall number of activities in the result
  average: 0// computed as: sum/count
};
The accumulator is not just an integer, but an object with several values: For maximum precision, we maintain the overall number of activities (sum) and result cardinality (count) separately and compute the average fresh on every event. We remember the number of activities for every individual entity in a map (contributors); this is necessary, because otherwise we would not have a clue by how much to decrement sum when an entity is updated or leaves the result set.

var maintainAverage = (accumulator, event) => {
  var newValue = event.matchType === 'remove' ? 0 : event.data.activities.length;
  var oldValue = accumulator.contributors[event.data.id] || 0;//default: 0

  if (newValue !== 0) {// remember new value
    accumulator.contributors[event.data.id] = newValue;
  } else {// forget old value
    delete accumulator.contributors[event.data.id];
  }
  accumulator.sum += newValue - oldValue;
  accumulator.count += event.matchType === 'remove' ? -1 : event.matchType === 'add' ? 1 : 0;
  accumulator.average = accumulator.count > 0 ? accumulator.sum / accumulator.count : 0;
  return accumulator;
};

The maintenance function extracts the current number of activities (newValue) from the incoming event and the former value (oldValue) from the contributors map in the accumulator. Depending on whether the incoming entity contributes to the average or not, it either stores the new value in the map or removes the old value. Finally, sum and count are updated and the average is computed and stored as accumulator.average.
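Since maintainAverage is a pure function, you can verify the bookkeeping with synthetic events outside of any stream (the event objects below are hand-crafted test data):

```javascript
// Standalone run of the average-maintenance idea with synthetic events
// (same accumulator shape as above; no SDK required).
function maintainAverage(acc, event) {
  var newValue = event.matchType === 'remove' ? 0 : event.data.activities.length;
  var oldValue = acc.contributors[event.data.id] || 0;
  if (newValue !== 0) {
    acc.contributors[event.data.id] = newValue;
  } else {
    delete acc.contributors[event.data.id];
  }
  acc.sum += newValue - oldValue;
  acc.count += event.matchType === 'remove' ? -1 : event.matchType === 'add' ? 1 : 0;
  acc.average = acc.count > 0 ? acc.sum / acc.count : 0;
  return acc;
}

var acc = { contributors: {}, count: 0, sum: 0, average: 0 };
acc = maintainAverage(acc, { matchType: 'add', data: { id: 't1', activities: [1, 2] } });
acc = maintainAverage(acc, { matchType: 'add', data: { id: 't2', activities: [1, 2, 3, 4] } });
console.log(acc.average); // → 3 (two lists with 2 and 4 activities)
acc = maintainAverage(acc, { matchType: 'remove', data: { id: 't1', activities: [1, 2] } });
console.log(acc.average); // → 4 (only the 4-activity list remains)
```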

Since we are only interested in the average value, we add another step to extract it from the accumulator via the map operator:

var subscription = stream.scan(maintainAverage, initialAccumulator)//update counter
                           .map(accumulator => accumulator.average)//extract average
                           .subscribe(value => console.log(value));//output counter


Streaming is available for all queries with the following limitations:

Users, Roles and Permissions

Baqend comes with a powerful user, role and permission management. This includes a generic registration and login mechanism and allows restricting access to insert, load, update, delete and query operations through per-class and per-objects rules. These access control lists (ACLs) are expressed through allow and deny rules on users and roles.


To restrict access to a specific role or user, the user needs a user account. Baqend supports a simple registration process to create a new user account. The User class is a predefined class which is instantiated during the registration process. A user object has a predefined username, which uniquely identifies the user (usually an email address), and a password. The password is hashed and salted by Baqend before it is saved.

DB.User.register('john.doe@example.com', 'MySecretPassword').then(function() {
  //Hey we are logged in
  console.log(DB.User.me.username); //'john.doe@example.com'
});

If you want to set additional user attributes during registration, you can alternatively create a new user instance and register the newly created instance together with a password.

var user = new DB.User({
  'username': 'john.doe@example.com',
  'firstName': 'John',
  'lastName': 'Doe',
  'age': 33
});

DB.User.register(user, 'MySecretPassword').then(function() {
  //Hey we are logged in
  console.log(DB.User.me === user); //true
});

Email Verification

By default, a newly registered user is automatically logged in and does not need to verify their email address. To enable email verification, open the settings in the Baqend dashboard and go to the email section. There you can enable email verification and set up a template for the verification email, which is then automatically sent to every newly registered user.

Until the newly registered user has verified their email address by clicking the verification link in the verification email, they are considered inactive and cannot log in. This state is indicated by a read-only inactive field of type Boolean in the user object. After verification, this field is automatically set to false. Only the admin is able to set the inactive field manually, e.g. to activate or ban users.

Login and Logout

Once a user is registered, they can log in with the DB.User.login() method.

DB.User.login('john.doe@example.com', 'MySecretPassword').then(function() {
  //Hey we are logged in again
  console.log(DB.User.me.username); //'john.doe@example.com'
});

After the successful login a session will be established and all further requests to Baqend are authenticated with the currently logged-in user.

Sessions in Baqend are stateless, which means there is no state attached to a session on the server side. When a session is started, a session token with a specified lifetime is created to identify the user. The session is refreshed as long as the user is active. If the lifetime is exceeded, the session is closed automatically. A logout simply deletes the session token locally and removes the current DB.User.me object.

DB.User.logout().then(function() {
  //We are logged out again
  console.log(DB.User.me); //null
});
Note: There is no need to close the session on the server side or to handle any session state, like in a PHP application for example.
Tip: The maximum session lifetime is determined by the so-called session longlife (default: 30 days). After this time the session expires and the user has to explicitly log in again. You can set the longlife in the settings of your Baqend dashboard.
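What a stateless session with a token lifetime means can be sketched as follows; the token shape and helper functions are illustrative assumptions, not Baqend's actual token format:

```javascript
// Sketch of stateless session handling: the token itself carries its
// expiry, so the server needs no per-session state. Field names are
// illustrative only.
function isSessionValid(token, now) {
  return now < token.expiresAt;
}

function refresh(token, now, lifetimeMs) {
  // an active user gets a token with a pushed-back expiry
  return { user: token.user, expiresAt: now + lifetimeMs };
}

var lifetime = 60 * 60 * 1000; // one hour
var token = { user: 'john.doe@example.com', expiresAt: 1000 + lifetime };

console.log(isSessionValid(token, 2000));                 // → true
token = refresh(token, 2000, lifetime);                   // activity refreshes it
console.log(isSessionValid(token, 1000 + lifetime + 1));  // → true (was refreshed)
console.log(isSessionValid({ user: 'x', expiresAt: 5 }, 10)); // → false (expired)
```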

New Passwords

Passwords can be changed by providing the old password and specifying a new one. Admin users can change the password of any user without giving the previous one:

//Using the user name
DB.User.newPassword("Username", "oldPassword", "newPassword").then(() => {
    //New Password is set
});

//Using a user object
DB.User.me.newPassword("oldPassword", "newPassword").then(...);

//When logged in as an admin
DB.User.newPassword("Username", null, "newPassword").then(...);

Auto login

During initialization, the Baqend SDK checks if the user has been logged in before in this session and has not logged out explicitly. As a consequence, returning users are automatically logged in and the DB.User.me object is set. New users are anonymous by default and no user object is associated with the DB.

DB.ready(function() {
  if (DB.User.me) {
    //do additional things if user is logged in
    console.log('Hello ' + DB.User.me.username); //the username of the user
  } else {
    //do additional things if user is not logged in
    console.log('Hello Anonymous');
  }
});
Loading Users

User objects are private by default, i.e. only admins and the user itself can load or update the object. This behaviour is intended to protect sensitive user information. There are two ways to grant access to user objects:


The Role class is also a predefined class which has a name and a users collection. The users collection contains all members of a role. A user has a given role if he is included in the role's users list.

//create a new role
var role = new DB.Role({name: 'My First Role'});
//add current user as a member of the role
role.users.add(DB.User.me);
//allow the user to modify the role memberships
//this overwrites the default where everyone has write access
role.acl.allowWriteAccess(DB.User.me);
role.save();

A role can be read and written by everyone by default. To protect the role so that no one else can add themselves to it, we restrict write access to the current user. For more information about setting permissions, see the Setting Object Permissions chapter.

Predefined Roles

There are three predefined roles:

loggedin: automatically assigned to every logged-in (authenticated) user.
node: the role under which code in Baqend modules and handlers is executed.
admin: the role of privileged users, e.g. the app owner.

Predefined roles can be used just like normal roles. Typical use-case are that you define schema-level permissions to elevate rights of operations triggered by handlers and modules, allow certain things to logged-in users or restrict access to admins.

Note: The node role does not have any special privileges by default, but you can use it in ACLs to give it special rights.


There are two types of permissions: class-based and object-based. The class-based permissions can be set by privileged users on the Baqend dashboard or by manipulating the class metadata. The object-based permissions can be set by users who have write access to an object. As shown in the image below, the class-level permissions are checked first. If the requesting user has the right permission at class level, the object-level permissions are checked. Only if the requesting user also has the right permissions at object level is he granted access to the entity.

Each permission consists of one allow list and one deny list. Users and roles can be whitelisted in the allow list and blacklisted in the deny list.
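As an illustration of how allow and deny lists commonly combine (deny takes precedence, and an empty allow list means public access), consider this standalone sketch; it demonstrates the general idea and is not necessarily Baqend's exact rule set:

```javascript
// Illustrative ACL check: a request is denied if any of the requester's
// principals (user id or roles) is blacklisted; otherwise it is allowed
// if the allow list is empty (public) or contains one of the principals.
function isAllowed(permission, principals) {
  var denied = permission.deny.some(function (p) { return principals.indexOf(p) !== -1; });
  if (denied) return false;
  if (permission.allow.length === 0) return true; // public
  return permission.allow.some(function (p) { return principals.indexOf(p) !== -1; });
}

var readPermission = { allow: ['role:friends'], deny: ['user:mallory'] };

console.log(isAllowed(readPermission, ['user:alice', 'role:friends']));   // → true
console.log(isAllowed(readPermission, ['user:mallory', 'role:friends'])); // → false (deny wins)
console.log(isAllowed({ allow: [], deny: [] }, ['user:anonymous']));      // → true (public)
```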

The access will be granted based on the following rules:

The following table shows the SDK methods and the permissions a user must have to perform the specific operation:

| Method | Class-based permission | Object-based permission |
|---|---|---|
| .load() | type.loadPermission | object.acl.read |
| .find() | type.queryPermission | object.acl.read |
| .insert() | type.insertPermission | - |
| .update() | type.updatePermission | object.acl.write |
| .delete() | type.deletePermission | object.acl.write |
| .save() | type.insertPermission if the object is inserted, type.updatePermission if the object is updated | |
| .save({force: true}) | both type.insertPermission and type.updatePermission will be checked | object.acl.write |

Note: There is currently no way to check if a user has permissions to perform an operation without actually performing the operation.

Anonymous Users & Public Access

Anonymous users only have permission to access public resources. A resource is publicly accessible if no class or object permission restricts access to specific users or roles. To check whether an object's permissions allow public access, use the todo.acl.isPublicReadAllowed() and todo.acl.isPublicWriteAllowed() methods.

todo.acl.isPublicReadAllowed() //will return true by default
todo.acl.isPublicWriteAllowed() //will return true by default

Note: Access can still be restricted to specific roles or users by class-based permissions, even if object.acl.isPublicReadAllowed() or object.acl.isPublicWriteAllowed() returns true.

Setting Object Permissions

The object permissions are split into read and write permissions. When inserting a new object, read and write access is granted to everyone by default. You can manipulate object permissions only if you have write permission on the object. If you want to restrict write access to the current user but share the object within a group, add the group's role to the read permissions and the current user to the write permissions:

DB.Role.find().equal('name', 'My First Role').singleResult(function(role) {
  var todo = new DB.Todo({name: 'My first Todo'});
  //share the object with the role, keep write access for the current user
  todo.acl.allowReadAccess(role)
    .allowWriteAccess(DB.User.me);

  return todo.save();
});

OAuth login

Another way to log in or register is via a 'Sign in with Google' or 'Sign in with Facebook' button. In general, any OAuth provider can be used to authenticate and authorize a user. As of now, Baqend supports the five major providers.


To set up a provider, follow the steps listed in the table below.

Supported Providers

| Provider | Setup | Notes |
|---|---|---|
| google | docs | Add as redirect URL: |
| facebook | docs | To set up Facebook OAuth, open the settings page of your Facebook app, switch to Advanced, activate Web OAuth Login and add as Valid OAuth redirect URI: |
| github | docs | Add as redirect URL: |
| twitter | docs | Add as redirect URL: https://[APP_NAME]-bq.global.ssl.fastly.net/v1/db/User/OAuth/twitter. Twitter does not support the E-Mail scope; by default a UUID is set as the username. |
| linkedin | docs | Add as redirect URL: |

Login & Registration

In order to use an OAuth provider to register or login users, you call one of the following SDK methods, depending on the provider:

DB.User.loginWithGoogle(clientID, options).then(function(user) {
    //logged in successfully
    DB.User.me == user;
});
// Same for the other providers, e.g. DB.User.loginWithFacebook(...)

The login call returns a promise and opens a new window showing the provider-specific login page. The promise is resolved with the logged in user, once the login in the new window is completed. The OAuth login does not distinguish between registration and login, so you don't have to worry about whether a user is already registered or not.

In the options passed to the login call you can configure, among other things, the OAuth scope. The scope defines what data is shared by the OAuth provider. On registration, the username is set to the email address if it is included in the allowed scope; otherwise a UUID is used.

Note: For the login to work despite popup blockers, the call needs to be made in response to a user interaction, e.g. after a click on the sign-in button. Also, an OAuth login will be aborted after 5 minutes of inactivity. The timeout can be changed with the timeout option.

Customize Login & Registration

To customize the login and registration behavior, create a Baqend module named oauth.[PROVIDER], which is called after the user has logged in (or registered). In this module you can access the logged-in user and a data object containing the OAuth token as well as the user information shared by the OAuth provider. The token can be used to make further API calls directly, or be saved for later use.

As an example, to customize the OAuth login for Google, create the Baqend module oauth.google. The module is called after the user has been successfully authorized:

exports.call = function(db, data, req) {
    db.User.me // the unresolved user object of the created or logged-in user

    // data contains the profile data sent by the OAuth provider
    data.id // the OAuth unique user id
    data.access_token // the OAuth user's API token
    data.email // the user's email if the required scope was requested by the client
};

The following table lists more information on what data can be shared by the OAuth providers:

| Provider | Profile documentation |
|---|---|
| google | Just returns the email by default. Visit OAuth 2.0 Scopes for Google APIs for a complete list of supported scopes. |
| facebook | Returns the content of the https://graph.facebook.com/v2.4/me resource. |
| github | Returns the authenticated user profile. |
| twitter | Just returns the access_token. An email address can't be queried with the Twitter API. |
| linkedin | Returns the content of the https://api.linkedin.com/v1/people/~?format=json resource. |

Note: The returned properties depend on the requested scope.

OAuth Login via Redirect

In some cases, it may be desirable to use the OAuth authorization without opening a new window, e.g. when cross-window communication is unavailable because of a missing localStorage object.

To use the login via redirect, you need to set a redirect parameter when calling the particular login method. In this case, the SDK does not return the user object, but creates a unique token and redirects to the specified redirect URL. Your site will be closed and the provider login will open instead.

//Set redirect parameter in loginOption
var loginOption = {redirect: 'http://.../yourRedirectPage'};

//call the SDK method with loginOption;
//the browser navigates to the provider login, so the promise does not resolve here
DB.User.loginWithGoogle(clientID, loginOption);

After communicating with the OAuth provider, the unique token is sent as a query parameter to the specified redirect page. In case of a failure, the particular error message is sent instead. The following table lists all possible query parameters:

| Parameter | Meaning |
|---|---|
| token | A unique token to identify the user object (in case of success) |
| loginOption | The specified login options (in case of success) |
| errorMessage | A URL-encoded error message (in case of failure) |
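Reading these parameters on the redirect page requires no SDK support; the standard URLSearchParams API is enough (the token value below is made up):

```javascript
// parse the query string of the redirect page; in the browser you would
// pass window.location.search instead of this example string
var params = new URLSearchParams('?token=a1b2c3');

var token = params.get('token');               // 'a1b2c3'
var errorMessage = params.get('errorMessage'); // null, since the login succeeded
```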

In case of success, you can call the following SDK method with the unique token and the specified login options as parameters to log in the user.

DB.User.loginWithToken(token, options).then(function(user) {
    //logged in successfully
    DB.User.me == user;
});

The login call returns a promise which is resolved with the logged in user. The OAuth login does not distinguish between registration and login, so you don't have to worry about whether a user is already registered or not.

Note: For the login via redirect to work, ensure to register all valid redirect URLs (e.g. 'http://.../yourRedirectPage') in the "Authorized Domains" section of your dashboard settings.

Baqend Code

Baqend Code Handlers and Modules are JavaScript (Node.js) functions that can be defined in the dashboard and get evaluated on the server side. They come in handy when you need to enforce rules and cannot trust clients.


With handlers you are able to intercept and modify CRUD operations sent by clients. To register a handler, open the handler page of a class on the dashboard. There are four tabs: one for each of the three basic data manipulation operations, plus onValidate for easy validation. Each tab has an empty function template that is called before the operation is executed. Here you can perform secure validations or execute additional business logic.


onValidate is called before an insert or update operation. It is a lightweight method to validate field values. The function is propagated into the client-side Baqend SDK and can be called on the client to validate inputs without rewriting the validation logic. The validation library validatorJs helps keep validation simple and readable. The onValidate method gets a validator object for each field of the entity, which provides all available validation methods.

function onValidate(username, email) {
  username.isLength(3, 15);
  //An error message can be passed as first argument
  email.isEmail('The email is not valid');
}

To validate objects on the client device call object.validate() in your application. It returns a result object containing the validation information.

user.username = "john.doe@example.com";
var result = user.validate();
if (result.isValid) {
  //true if all fields are valid
}

var emailResult = result.fields.email;
if (!emailResult.isValid) {
  //if the email is not valid, the errors can be retrieved from the error array
  console.log(emailResult.errors[0]); //'The email is not valid'
}

It is also possible to write custom validators, using the is validator:

function onValidate(password, passwordRepeat) {
  password.is('The passwords do not match', function(value) {
    //the validator passes when the callback returns true
    return value == passwordRepeat.value;
  });
}

user.password = "mySecretPassword";
user.passwordRepeat = "mySecretPasswort";
var result = user.validate();

var passwordResult = result.fields.password;
if (!passwordResult.isValid) {
  //if the passwords do not match, the errors can be retrieved from the error array
  console.log(passwordResult.errors[0]); //'The passwords do not match'
}

onInsert and onUpdate

If you need complex logic or your validation depends on other objects, use the onUpdate and/or onInsert handler. Both the handler's this object and the second argument refer to the object being inserted or updated. All attributes can be read and manipulated through normal property access. The requesting user can be retrieved through db.User.me. Inside Baqend code the user is an unresolved object, just like all other referenced objects; if you need to read or manipulate its attributes, .load() the user first. Consider for example the case of maintaining the total time spent on a todo in a dedicated field (e.g. for sorting):

exports.onUpdate = function(db, obj) {
  if (obj.done) {
    //ensure that you always return promises of asynchronous calls,
    //otherwise errors will not abort the update operation
    return db.User.me.load().then(function(user) {
      obj.activities.forEach(function(activity) {
        user.workingTime += activity.end.getTime() - activity.start.getTime();
      });
      return user.save();
    });
  }
};

Since it's possible to reactivate finished tasks, we might need to decrease the counter again. This is only necessary if the previous status of the Todo object was done. To get the state of the object before the current update (the before image), use db.Todo.load(obj.id). obj.load(), on the other hand, would refresh the state of the object currently under update to the previous state.

exports.onUpdate = function(db, obj) {
  return db.Todo.load(obj.id).then(function(oldTodo) {
    if (oldTodo.done != obj.done) {
      return db.User.me.load().then(function(user) {
        var totalTime = obj.activities.reduce(function(totalTime, activity) {
          return totalTime += activity.end.getTime() - activity.start.getTime();
        }, 0);

        if (obj.done) {
          user.workingTime += totalTime;
        } else {
          user.workingTime -= totalTime;
        }

        return user.save();
      });
    }
  });
};
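The add-or-subtract decision inside this handler can be expressed as a small pure function, shown here as an illustrative sketch (workingTimeDelta is not an SDK function):

```javascript
// returns the signed working-time adjustment for a done-flag transition
function workingTimeDelta(oldDone, newDone, totalTime) {
  if (oldDone === newDone) return 0;       // status unchanged: no adjustment
  return newDone ? totalTime : -totalTime; // finished: add; reopened: subtract
}
```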

It is also possible to change the actual object in the onInsert and onUpdate handler before it is saved. When issuing the insert/update from the SDK, you will not get these changes back by default. To get the changed data back, use the refresh flag of the save(), insert() or update() method.

//the Baqend handler increments a counter on every update
exports.onUpdate = function(db, obj) {
  obj.counter++;
};

//on client side without refresh
DB.Test.load('546c6-a...').then(function(obj) {
  return obj.save();
}).then(function(obj) {
  //obj.counter == 0
});

//on client side with refresh
DB.Test.load('546c6-a...').then(function(obj) {
  return obj.save({refresh: true});
}).then(function(obj) {
  //obj.counter == 1
});
Note: Inside Baqend Code data operations (e.g. user.save()) have the access rights of the user starting the request enhanced by an additional node role. Calls to Baqend originating from handlers will not trigger another onUpdate(db) call. See Predefined Roles for more details.


The onDelete handler is called with an empty object only containing the id of the deleted object. The method can for instance be used to log information or delete related objects.

exports.onDelete = function(db, obj) {
  obj.id //the id of the object which will be deleted
  obj.name //null
};

All four handlers are before-operation handlers. Be aware that they are called after the class-level permissions are checked, but before object-level permissions are validated. Thus, making changes to other objects inside handlers should be treated with care: these operations could succeed while the original operation might fail due to missing object access rights. An elegant way to simplify such cases is the use of after-handlers, one of our Upcoming Features.


Baqend Modules are JavaScript modules stored in Baqend. They can be called by clients and be imported by other modules and handlers. Only modules that export a call method can be called by clients directly. The Baqend module gets the DB object as the first parameter, the data sent by the client as the second, and the request object as the third.

Let's create a simple invite system. To invite a user to an event, the invitation is added to his/her invite list. This process needs to be encapsulated in a Baqend module, as it requires write permissions on other users.

exports.call = function(db, data, req) {
  return db.User.find()
    .equal('username', data.username)
    .singleResult(function(user) {
      //add the invitation to the user's invite list
      user.invites.add(data.invite);
      return user.save();
    });
};

The data parameter passed into the function contains the request payload, i.e. the decoded query parameters of a GET request or the parsed body of a POST request.

On the client side we can now invite a user by username to our event by invoking the Baqend invite method. Baqend modules can be invoked using get to read data and post to modify data.

DB.modules.post('invite', {username: 'peter@example.com', invite: 'My new event'})
  .then(function() {
    //invite was sent successfully
  });

Baqend modules are also useful for sending messages like emails, push notifications and SMS.

Aborting requests

To abort an insert, update, delete or Baqend module invocation, handlers as well as modules may throw an Abort exception.

exports.onDelete = function(db, obj) {
  throw new Abort('Delete not allowed.', {id: obj.id});
};

The Abort exception aborts the request. The optional data parameter transfers additional JSON data back to the client. The data can be retrieved from the error object passed to the reject handler of the promise.

obj.delete().then(function() {
  //object was deleted successfully
}, function(e) {
  e.message //the error message
  e.data.id //the data sent back to the client
});

Advanced request handling

In addition to the simplified call(db, data, req) method, we provide an advanced way to handle requests within Baqend modules. You can implement GET and POST request handling separately by implementing an equivalent get(db, req, res) or post(db, req, res) method.

Note: The second parameter is the request object and the third parameter is an express response object.

With the request object, you can handle form submissions via get or post:

//Handle get submissions
exports.get = function(db, req, res) {
  //access url get parameters
  var myParam = req.query.myParam;
};

//Handle post submissions
exports.post = function(db, req, res) {
  //access form post parameters
  var myParam = req.body.myParam;
};


With the response object, you can send additional response headers and have better control over the content that will be sent back. You can use the complete express API to handle the actual request.

exports.get = function(db, req, res) {
  var myParam = req.query.myParam;

  if (db.User.me) {
    //we are logged in
    return db.User.me.load().then(function() {
      //use the powerful express helpers
      res.json({
        myParam: myParam,
        token: sig(myParam, db.User.me), //sig is a custom signing helper
        userId: db.User.me.id
      });
    });
  } else {
    //we are anonymous, lets redirect the user to a login page
    res.redirect('/login');
  }
};

It is important that you send the content back with one of the express res.send() helpers; otherwise the response will not be sent back to the client. In addition, ensure that you return a promise when you make asynchronous calls within your Baqend module, otherwise the request will be aborted with an error!

Handling binary data

As a part of the advanced request handling, it is also possible to upload and download binary files in Baqend modules.

To send binary data to your Baqend module, you can specify the requestType option. With the responseType option you can receive binary data of the specified type from your Baqend module. This works similarly to the File API, and you can use all the listed file types as requestType and responseType, too.

var svgBase64 = 'PHN2ZyB4bWxucz0...';
var mimeType = 'image/svg+xml';

return db.modules.post(bucket, svgBase64, {
  requestType: 'base64',    //Sending the file as a base64 string
  mimeType: mimeType,       //Setting the mimeType as Content-Type
  responseType: 'data-url'  //Receiving the data as a data-url
}).then(function(result) {
  result // 'data:image/svg+xml;base64,PHN2ZyB4bWxucz0...'
});
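Under the hood, base64 and data-url are plain encodings that Node.js Buffers can produce directly. The following self-contained sketch shows the conversions involved (the SVG content is made up and no Baqend API is used):

```javascript
// encode arbitrary bytes as base64 and wrap them in a data-url
var payload = Buffer.from('<svg xmlns="http://www.w3.org/2000/svg"/>', 'utf8');
var base64 = payload.toString('base64');
var dataUrl = 'data:image/svg+xml;base64,' + base64;

// decoding works the same way in reverse
var decoded = Buffer.from(base64, 'base64').toString('utf8');
```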

To handle binary files in a Baqend module, you must process the incoming raw stream directly. The incoming request object is a Node.js Readable Stream, and you will receive the incoming raw data as Buffer chunks.

To send binary data back to the client, you should set the Content-Type of the response data with the express res.type() method and send the data afterwards.

If you have completed the request handling you need to resolve the previously returned promise to signal the completion of the request handling.

//this simple Baqend handler just sends the uploaded file back to the client
exports.post = function(db, req, res) {
  return new Promise(function(success) {
    //node gives the file stream as chunks of Buffer
    var chunks = [];
    req.on('data', function(chunk) {
      chunks.push(chunk);
    });
    req.on('end', function() {
      var requestData = Buffer.concat(chunks);
      // do something with the requestData
      res.type(req.get('Content-Type')) //set the Content-Type of the response
          .send(requestData); //sending some data back
      success(); //resolve the promise to complete the request handling
    });
  });
};

Handling Files

In Baqend code you can use the same File API as from your client. For Baqend code, however, we support two additional file content formats, namely stream and buffer.

With the stream format you can for example stream data through your Baqend Code into the database without buffering it, as the following example shows:

var http = require('https');

exports.call = function(db, data, req) {
  return new Promise((success, error) => {
    var httpReq = http.request({
      method: 'GET',
      hostname: data.host,
      path: data.path
    }, success);

    httpReq.on('error', error);
    httpReq.end(); //actually send the request
  }).then((stream) => {
    var file = new db.File({parent: '/www', name: data.name});
    var type = stream.headers['content-type'];
    var size = stream.headers['content-length'];
    return file.upload({data: stream, type: 'stream', mimeType: type, size: size});
  });
};

This example shows a Baqend Module that sends an HTTP request (httpReq) to download whatever is referenced by the URL (data.host and data.path). We take the stream from this download and upload a file with this content into the /www root folder. This happens without buffering the downloaded data as it is streamed right through to the database.

Note: If you stream the file content to the server you always need to specify the file size as shown in the example.

Importing code and libraries

Baqend code constitutes CommonJS modules and can require other modules and external libraries.

Baqend modules not exposing a call method can't be called by the client but may be required by other modules and handlers.

exports.updateMe = function(db) {
  return db.User.me.load().then(function(user) {
    return user.save();
  });
};

Baqend modules are imported through relative require calls and external libraries through absolute require calls.

//require another Baqend module
var myModule = require('./myModule');
//require an update (or insert, delete, validate) handler from 'MyClass'
var updateHandler = require('./MyClass/update');
//require the http core module for external http requests
var http = require('http');
exports.call = function(db, data, req) {
  return myModule.updateMe(db);
};

In Baqend Handlers modules are required from the parent folder.

//Require the module from the parent folder
var myModule = require('../myModule');
exports.onUpdate = function(db, obj) {
  return myModule.updateMe(db);
};

The following additional libraries can always be required in Baqend code:

Note: If you need custom Node.js modules from npm, please contact us via support@baqend.com and we will add them.


Baqend Code is always executed with the permissions of the requesting client. If the requesting user is not logged in, all requests made from Baqend code are anonymous. Both anonymous and authenticated invocations are enhanced by the node role. This predefined role can be used in class and object ACLs to grant Baqend code additional access rights. In addition there are some Baqend API resources which can only be accessed by the admin or the node role.

Push Notifications

Baqend provides the ability to send push notifications to end users' devices. Before you can send a push notification, you must first register the Device of the user. Registered devices can later be used in Baqend code as targets for push notifications.

Note: Currently Baqend supports iOS and Android devices; support for more platforms is planned.

Setup Push

Apple Push Notification Service (APNS)

To enable push notifications for iOS devices, you have to upload your production or sandbox certificate in the Baqend settings view of your app. Please upload your certificate as a p12 file without any password protection; otherwise it's not possible for Baqend to use it.

The sandbox certificate is needed when testing the app directly from Xcode. If the app has been published to the App Store or should be tested in TestFlight, you must upload your production certificate. It's currently not possible to use both certificate types at the same time.

This tutorial shows how to enable push notifications in your app and how to export your certificate as a p12 file.

Google Cloud Messaging (GCM)

To enable push notifications for Android devices, Baqend needs your GCM API key. The key can be saved in the Baqend settings view of your app.

To get your API key, browse to the Google Developers Console, open Enable and manage APIs, create or choose your app, and click on Credentials on the left side. If you have already created a server key, copy it from the list and save it in the Baqend settings view of your app; otherwise click on Create credentials -> API key -> Server key to create a new API key. It's important that the field Accept requests from these server IP addresses is empty.

In your app itself you have to use the sender ID and not the server API key. The sender ID is called project number in the Google Developers Console.

Device registration

A registered device is represented in Baqend by the Device class. The Device class contains the deviceOs field, which holds the platform name of the registered device, currently Android or IOS. To register a new device, you must first obtain a device token with the mobile framework you use. With the token you can register the device on Baqend.

It is not required to register a device every time your app initializes. The SDK provides a flag that indicates whether the device is already registered, so you only have to request a device token if the device is currently not registered:

DB.ready().then(function() {
    if (!DB.Device.isRegistered) {
        //helper method which fetches a new device token, using your favorite framework
        var deviceToken = requestDeviceToken();

        DB.Device.register('IOS', deviceToken);
    }
});

The device class can be extended with custom fields like any other class in Baqend. This allows you to save additional data with your device, which you can later use to query the devices that should receive a push notification. To persist additional data with your device while registering it, you can pass a Device object to the registration method.

A common use case is to save a reference to the user with the device, which allows you to send a push notification to the user's devices later on.

var device = new DB.Device({
    "user": DB.User.me
});

DB.Device.register('IOS', deviceToken, device);


To send a push notification, the SDK provides a PushMessage class which can be used to send a message to one or more devices. In addition to the message itself, a PushMessage can transport additional information to the end user's device:

| Name | Type | Notes |
|---|---|---|
| message | String | The optional message to display |
| subject | String | The headline of the push message |
| sound | String | The filename of the sound file. The device uses this file as the notification sound. |
| badge | Number | The badge count, displayed on the app's icon; only supported by IOS |
| data | Object | Additional JSON data sent directly to your app |

Sending push

Push notifications can only be sent from within Baqend code. To send a push notification to one or more devices, you must first obtain the desired device ids. To do so, you can query devices by the additional data stored in the device object, or save device references in another object.

/**
 * The Baqend code sends a push notification to the given list of users.
 * Therefore the extended device class contains a user field.
 * @param {Array<String>} data.users A list of user ids
 * @param {String} data.message The message to push
 */
exports.call = function(db, data) {
  var users = data.users;
  var message = data.message;
  var subject = data.subject;

  return db.Device.find()
    .in('user', users)
    .resultList()
    .then(function(devices) {
      var pushMessage = new db.Device.PushMessage(devices, message, subject);
      return db.Device.push(pushMessage);
    });
};


The Baqend SDK internally tracks the state of all living entity instances and their attributes. If an attribute of an entity is changed, the entity will be marked as dirty. Only dirty entities will be sent back to Baqend when calling save() or update(). The collections and embedded objects of an entity are tracked the same way and mark the owning entity as dirty on modification. The big advantage of this dirty tracking is that when you apply deep saving to persist object graphs, only those objects that were actually changed are transferred. This saves performance and bandwidth.

DB.Todo.load('Todo1').then(function(todo) {
  todo.save(); //will not perform a Baqend request since the object is not dirty
});
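Conceptually, attribute-level dirty tracking can be modeled with a JavaScript Proxy that flags the entity on every changed write. This is only an illustrative sketch, not the SDK's actual mechanism:

```javascript
// wrap an entity so that any changed attribute marks it as dirty
function trackDirty(entity) {
  var state = {dirty: false};
  var proxy = new Proxy(entity, {
    set: function(target, prop, value) {
      if (target[prop] !== value) state.dirty = true;
      target[prop] = value;
      return true;
    }
  });
  return {entity: proxy, state: state};
}

var tracked = trackDirty({name: 'My first Todo'});
tracked.entity.name = 'My first Todo'; // unchanged value: still clean
tracked.entity.name = 'Renamed';       // changed value: now dirty
```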

Deep Loading

As described in the References chapter, references between entities will be handled differently from embedded objects or collections. The referenced objects will not be loaded with the referencing entity by default.

//while loading the todo, the reference will not be resolved automatically
DB.Todo.load('7b2c...').then(function(firstTodo) {
  console.log(firstTodo.name); //'My first Todo'
  console.log(firstTodo.doNext.name); //will throw an object not available error
});

In a more complex scenario you may have references in a collection. These references won't be loaded by default either.

DB.Todo.load('7b2c...').then(function(firstTodo) {
  //will throw an object not available error
  console.log(firstTodo.upComingTodos[0].name);
});

To load dependent objects, you can pass the depth option while loading the entity. The depth option allows you to set a reference depth up to which references will automatically be loaded. A depth value of 0 (the default) just loads the entity.

DB.Todo.load('7b2c...', {depth: 0}).then(function(firstTodo) {
  //will throw an object not available error
  console.log(firstTodo.doNext.name);
  //will still throw an object not available error
  console.log(firstTodo.upComingTodos[0].name);
});

A depth value of 1 loads the entity and one additional level of references. This also includes references in collections and embedded objects.

DB.Todo.load('7b2c...', {depth: 1}).then(function(firstTodo) {
  console.log(firstTodo.doNext.name); //'My second Todo'
  console.log(firstTodo.upComingTodos[0].name); //'My second Todo'
  //will throw an object not available error
  console.log(firstTodo.doNext.doNext.name);
  //will still throw an object not available error
  console.log(firstTodo.upComingTodos[0].doNext.name);
});

Setting the depth value to 2 resolves the next level of references, and so on. You can set the depth option to true to load all references by reachability, but be aware that this is dangerous for large object graphs.
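The depth semantics can be illustrated with a small self-contained sketch; the {ref: id} representation and the lookup map here are assumptions made for illustration, not the SDK's wire format:

```javascript
// Resolve references up to `depth` levels; unresolved references stay null.
function resolveDepth(id, depth, lookup) {
  var raw = lookup[id];
  var entity = {};
  Object.keys(raw).forEach(function(key) {
    var value = raw[key];
    if (value && value.ref) {
      // only follow references while depth > 0, otherwise leave unresolved
      entity[key] = depth > 0 ? resolveDepth(value.ref, depth - 1, lookup) : null;
    } else {
      entity[key] = value;
    }
  });
  return entity;
}

var lookup = {
  t1: {name: 'My first Todo', doNext: {ref: 't2'}},
  t2: {name: 'My second Todo', doNext: {ref: 't3'}},
  t3: {name: 'My third Todo', doNext: null}
};
var todo = resolveDepth('t1', 1, lookup);
```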

Deep Loading with Queries

Deep loading also works for query results obtained via resultList and singleResult:

DB.Todo.find().resultList({depth: 1}, function(result) {
  result.forEach(function(todo) {
    console.log(todo.doNext.name); //the reference is already loaded
  });
});
In that case all referenced objects in all objects loaded by the query are fetched, too.

Cached Loads

Each EntityManager instance has an instance cache. This instance cache is used while loading objects and resolving references. When an entity is loaded, it is stored in this instance cache, and the same instance is returned whenever it is requested again. This ensures that you always get the same instance for a given object id; in other words, object equality is guaranteed for objects with the same id.
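Conceptually, the instance cache is an identity map. The following minimal sketch (not the SDK internals) shows why two loads of the same id yield the same instance:

```javascript
// load() always hands out the same instance for a given id
function InstanceCache(fetchFromServer) {
  var instances = new Map();
  this.load = function(id) {
    if (!instances.has(id)) {
      instances.set(id, fetchFromServer(id));
    }
    return instances.get(id);
  };
}

var cache = new InstanceCache(function(id) {
  return {id: id}; // stand-in for a server round trip
});
```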

DB.Todo.load('MyFirstTodo', {depth: 1}).then(function(firstTodo) {
  DB.Todo.load('MySecondTodo').then(function(secondTodo) {
    //true, object equality is guaranteed by the DB instance cache
    console.log(firstTodo.doNext == secondTodo);
  });
});

Deep Saving

As with deep loading, you can also save referenced entities with the referencing entity by passing the depth option. If you call save() without any options, only the entity itself will be saved, but not any referenced entity. This is the same behaviour as passing depth with the value 0.

var firstTodo = new DB.Todo({name: 'My first Todo'});
var secondTodo = new DB.Todo({name: 'My second Todo'});

firstTodo.doNext = secondTodo;
firstTodo.save(); //will save firstTodo, but not the secondTodo

By passing the depth option with a value of 1 the entity and all its direct referenced entities will be saved.

var thirdTodo = new DB.Todo({name: 'My third Todo'});

firstTodo.doNext = secondTodo;
secondTodo.doNext = thirdTodo;
//will save firstTodo and secondTodo, but not the thirdTodo
firstTodo.save({depth: 1});

And again, increasing the depth value to 2 will save all directly referenced entities plus all entities referenced by those. You can also pass depth with true to save all dirty entities by reachability.


With the hosting feature you can serve your website (html, css, js, images) right from your Baqend cloud instance while using your own domain.

Public File Access

All assets stored in the www root folder can be accessed under your app domain (<appName>.app.baqend.com) as in the following examples:

| Folder (parent) | File Name (name) | Public URL |
|---|---|---|
| www | index.html | <appName>.app.baqend.com/ |
| www | about.html | <appName>.app.baqend.com/about.html |
| www/images | logo.jpg | <appName>.app.baqend.com/images/logo.jpg |

Tip: Baqend hosting works great with static site generators like Jekyll, Hugo, Octopress or Hexo. You can start completely static or even import data from a CMS like WordPress. Later you can gradually add dynamic parts using the Baqend SDK. From the first static blog post to a highly dynamic site, everything will be cached and accelerated by Baqend.


To deploy your assets, you can either use the file explorer in the Baqend dashboard (e.g. drag-and-drop files and folders) or, for easy, automated deployment, use the Baqend CLI.

Custom Domains

To serve your website under your own domain, you have to create a DNS entry and register the custom domain in your Baqend dashboard:

  1. Log into the account at your domain provider and add a CNAME rule like the following to your DNS entries:

    www.yourdomain.com. IN CNAME global.prod.fastly.net.

Note: You should not use a naked (apex) domain as a CNAME, since many DNS providers do not support it. Instead use a subdomain such as www.yourdomain.com. In addition, you should ensure that no other DNS entry is set for the used domain.

  2. Log into your Baqend dashboard and open your app settings. In the Hosting section simply add your custom domain www.yourdomain.com and click the save button. Your domain will now be registered at the CDN. Instead of <appName>.app.baqend.com you can now use www.yourdomain.com.

Consult your DNS provider's instructions to configure the CNAME record for your domain name. The steps to add a CNAME record will vary for each registrar's control panel interface.

If you cannot find your provider's CNAME configuration instructions, Google maintains instructions for most major providers.

Note: The registration of your domain as well as your DNS entry can take a few minutes to become accessible. If you have trouble configuring your CNAME records, contact us at support@baqend.com.
Note: To register an apex/naked domain (such as example.com, without www) you still need to establish the link to global.prod.fastly.net. (read here why this is a problem). Some domain providers have solutions for that (like AWS). A workaround is to redirect to your www domain at your domain provider or to use a service like this. If you're unable to find a solution for your provider, contact us at support@baqend.com.

Single Page Apps

History API

If you use the History API of your single-page app framework (like Angular 2 or React), you need to host your index.html also as 404.html. This leaves you with two identical files:

Folder (parent) File Name (name) Public Url
www index.html <appName>.app.baqend.com/
www 404.html Every URL where no file is hosted
This makes sure that every entry point into the app uses the code from your index.html.

If a user, for example, directly opens a URL like http://yourapp.com/products/42, this request needs to be handled by the single-page app because there is no hosted HTML file under /www/products/42.html. The 404.html is returned whenever no hosted file is found for a URL. By hosting the same code in both your index.html and 404.html, all entry points are handled correctly.

SSL Hosting

All data accessed over the Baqend SDK is SSL-encrypted, since encryption is enforced at connect time. If you need SSL encryption for your hosted assets too, please contact us (support@baqend.com), as this feature is not automated yet.


Baqend comes with a powerful File and Asset API. You can create multiple root level folders and apply different permissions to those. Files can be uploaded, replaced, downloaded and deleted when the user has the right permissions.

In addition, the SDK comes with a rich set of functionality to transform file contents into different browser-friendly formats. The following table lists all supported file formats:

type JavaScript type Description
'arraybuffer' ArrayBuffer The content is represented as a fixed-length raw binary data buffer, e.g. var buffer = new ArrayBuffer(8)
'blob' Blob|File The content is represented as a simple blob, e.g. var blob = new Blob(["<a href=..."], {type: 'text/html'})
Note: This does not work in Baqend code.
'json' object|array|string The file content is represented as JSON, e.g. var json = {prop: "value"}
'text' string The file content is represented as a string, e.g. 'A Simple Text'
'base64' string The file content as a base64-encoded string
'data-url' string A data URL which represents the file content
'stream' Stream A stream containing the file content. See our example.
Note: This only works in Baqend code.
'buffer' Buffer A buffer containing the file content, e.g. var buffer = Buffer.from(array)
Note: This only works in Baqend code.

The file API accepts all the listed formats as upload types and transforms the content to the correct binary representation while uploading. The SDK guesses the correct type automatically, except for the base64 type, which you must specify explicitly.
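A rough sketch of why every format except base64 can be guessed from the data value (guessType is a hypothetical helper for illustration, not the SDK's actual detection logic):

```javascript
// A plain base64 string is indistinguishable from ordinary text,
// so only an explicit type option can tell the two apart.
function guessType(data) {
  if (data instanceof ArrayBuffer) return 'arraybuffer';
  if (typeof Blob !== 'undefined' && data instanceof Blob) return 'blob';
  if (typeof data === 'string') return 'text'; // could just as well be base64!
  if (data !== null && typeof data === 'object') return 'json';
}

guessType(new ArrayBuffer(8));      // 'arraybuffer'
guessType('R0lGODlhDAAeALMAAG...'); // 'text' - base64 content cannot be detected
guessType({prop: 'value'});         // 'json'
```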

When you download a file, you can specify the format in which the downloaded content should be provided.

Accessing Files

The simplest way to access a file is to retrieve its absolute URL from the Baqend SDK. For this you can use any existing file reference, or you can create one yourself.

There are multiple ways to reference a file:

// Absolute references have to start with '/file' followed by a root folder e.g. '/www'
var file = new DB.File('/file/www/myPic.jpg');
// Alternatively you can give the path of the file, starting with the root folder
var file = new DB.File({path: '/www/myPic.jpg'});
// Or you specify the name and parent (folder) of the file
var file = new DB.File({parent: '/www', name: 'myPic.jpg'});
// Because '/www' is the default parent, it can be omitted
var file = new DB.File({name: 'myPic.jpg'});

To get the full URL to access the file, just use the file.url shorthand. It ensures that the correct domain is used, checks whether the file is stale or can be served directly from the cache, and attaches authorization credentials if needed.

In a common HTML template engine you can just write:

<img src="{{file.url}}">

You can also organize your files in folders, for example:

//creates the same file reference
var file = new DB.File('/file/www/images/myPic.jpg');
// parent paths start with the root folder, e.g. /www, followed by additional folders
var file = new DB.File({parent: '/www/images', name: 'myPic.jpg'});
Note: Parent paths always start with a root folder, since access control (who can read and modify the folder contents) can only be set on the root folder and is applied to all nested files and folders.

Embedded Files

Files can also be embedded in other objects like for example a profile image in a user object (see primitive types):

db.User.me.load().then(function(user) {
    var file = user.profileImage;
    console.log(file.url); // The file url, e.g. 'http://app.baqend.com/v1/file/users/img/myImg.png'
});


Suppose you have an uploaded file /www/images/myPic.jpg and a reference to it. You can then use the load method to get additional file metadata (not the content itself):

var file = new DB.File('/file/www/images/myPic.jpg');
file.load(function() {
    file.isMetadataLoaded; // > true
    file.lastModified; // > The time of the last update
    file.size; // > File size in bytes
});

Listing Files

You can also list all files inside a folder. Either provide the path to the folder as a string or a file reference representing the folder:

var folder = new DB.File('/file/www/images/');
DB.File.listFiles(folder).then(function(files) {
    // all the files in the folder '/www/images/'
});

Note: If you have many files in a folder, you should always specify a limit on how many files are returned. See the SDK documentation for details.

You can also list all root folders:

DB.File.listBuckets().then(function(rootFolders) {
    // all root folders
});

Uploading Files

To upload a file you must first create a file reference with a name and content. Afterwards you can upload the file by simply invoking upload():

// 'blob' holds the file content, e.g. a Blob created by you or provided by the browser
var file = new DB.File({name: 'test.png', data: blob, type: 'blob'});
file.upload().then(function(file) {
    // upload succeeded
    file.mimeType; // contains the media type of the file
    file.lastModified; // the upload date
    file.eTag; // the eTag of the file
}, function(error) {
    // upload failed with an error
});

In most cases you will want to upload files that were provided by your user through a file input field or a file drag & drop event.

<input type="file" id="input" multiple onchange="uploadFiles(this.files)">

function uploadFiles(files) {
  var pendingUploads = [];

  for (var i = 0, numFiles = files.length; i < numFiles; i++) {
    // If you omit the name parameter, the name of the provided file object is used
    var file = new DB.File({data: files[i]});
    pendingUploads.push(file.upload());
  }

  Promise.all(pendingUploads).then(function() {
    // all files were uploaded successfully
  });
}

In case you want to upload base64-encoded binary data, you can use the base64 type in the options object:

var file = new DB.File({name: 'test.gif', data: 'R0lGODlhDAAeALMAAG...', type: 'base64', mimeType: 'image/gif'});
file.upload().then(function(file) {
    // upload succeeded
    file.mimeType; // contains the media type of the file
    file.lastModified; // the upload date
    file.eTag; // the eTag of the file
}, function(error) {
    // upload failed with an error
});

If you try to overwrite an existing file without having previously fetched the file or its metadata, or if the file has been changed in the meantime, the upload will be rejected to prevent accidental file replacement. If you want to skip this verification, you can pass the {force: true} option to the upload() call.

Note: To upload a file you must have at least the insert or update permission on the root folder and write access on the file.

Downloading Files

Downloading a file works similarly to uploading one. Just create a file reference and call file.download():

var file = new DB.File({name: 'myPic.jpg'});
file.download(function(data) {
    data; // is provided as a Blob by default

    // accessing the metadata of the file
    file.mimeType; // contains the media type of the file
    file.lastModified; // the upload date
    file.eTag; // the eTag of the file
});

To load the file content in a different format, just request a download type:

var file = new DB.File({name: 'myPic.jpg', type: 'data-url'});
file.download(function(data) {
    // data is a data url string
    data; // "data:image/jpeg;base64,R0lGODlhDAA..."
});
Note: To download a file you must have at least the load permission on the root folder and read access on the file.

Deleting Files

To delete a file just call the delete() method after creating the file reference:

var file = new DB.File({name: 'test.png'});
file.delete().then(function(file) {
    // deletion succeeded
}, function(error) {
    // deletion failed with an error
});

If you have previously fetched the file or its metadata and the file has been changed in the meantime, the deletion will be rejected to prevent accidental file deletions. If you want to skip this verification, you can pass the {force: true} option to the delete() call.

File ACLs

File permissions work similarly to object ACLs: you can define permissions on root folders (similar to class-based permissions) and on individual files (similar to object-level permissions).

The root folder permissions are applied to all nested folders and files.

File Permissions

The following table gives an overview of the required permissions per operation:

Method Root-folder-based permission File-based permission
.download(), .url folder.load object.acl.read
.upload(<new file>) folder.insert -
.upload(<existing file>) folder.update object.acl.write
.upload({force: true}) both folder.insert and folder.update will be checked object.acl.write
.delete() folder.delete object.acl.write

Set Root Folder Permissions

By default only the admin can access root folders, with one exception: the www folder is publicly readable to enable Baqend's file hosting feature.

To change the permissions of a specific root folder, you should usually use the Baqend dashboard. But if you want to change the permissions programmatically, you can use the saveMetadata() method:

// grant full access on the pictures root folder for the current user
DB.File.saveMetadata('pictures', {
   load: new DB.util.Permission().allowAccess(DB.User.me),
   insert: new DB.util.Permission().allowAccess(DB.User.me),
   update: new DB.util.Permission().allowAccess(DB.User.me),
   delete: new DB.util.Permission().allowAccess(DB.User.me),
   query: new DB.util.Permission().allowAccess(DB.User.me)
});
Note: To actually change the permissions of a root folder, you must own the admin role or your code must be executed as Baqend code.

Set File Permissions

The file permissions can be set when a file is uploaded. To do so, pass the acl option to the File constructor or to the upload method.

var file = new DB.File({
    name: 'test.png',
    data: blob, // a Blob holding the file content
    acl: new DB.Acl()
});


How Baqend's Caching Works

Baqend uses a combination of CDN and client caching based on a Bloom filter data structure called the Cache Sketch. This enables Baqend-based applications to use not only CDN caches but also expiration-based caches (in most cases the browser cache) to cache any dynamic data.

Caching everything, not just assets

The tricky thing about such caches is that you must specify a cache lifetime (TTL) when you first deliver the data from the server. After that, you have no way to kick the data out: it will be served from the browser cache until the TTL expires. For static assets this is not a big problem, since they usually only change when you deploy a new version of your web application. Therefore, you can use tools like gulp-rev-all and grunt-filerev to hash the assets. By renaming the assets at deployment time you ensure that all users see the latest version of your page while using caches to their fullest.

But wait! What about all the data which is loaded and changed by your application at runtime? Changing user profiles, updating a post or adding a new comment are seemingly impossible to combine with the browser cache, since you cannot predict when such updates will happen. Therefore, caching is usually disabled or only very low TTLs are used.

Baqend’s Cache Sketch

We have researched and developed a solution that lets us check the staleness of any data before we actually fetch it. At the beginning of each user session, the connect call fetches a very small data structure called a Bloom filter, which is a highly compressed representation of a set. Before making a request, the SDK first checks this set to see whether it contains an entry for the resource to fetch. An entry in the set indicates that the content was changed in the recent past and may be stale. In that case the SDK bypasses the browser cache and fetches the content from the nearest CDN edge server. In all other cases the content is served directly from the browser cache. Using the browser cache saves network traffic and bandwidth and is rocket-fast.

In addition, we ensure that the CDN always contains the most recent data, by instantly purging data when it becomes stale.

The Bloom filter is a probabilistic data structure with a tunable false-positive rate, which means that the set may indicate containment for objects that were never added. This is not a problem, since it just means that we revalidate the freshness of an object before serving it from the browser cache. The false-positive rate is very low, and it is what enables us to keep the footprint of the set very small: for example, we need just 11 KB to store 20,000 distinct updates.
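To make the idea concrete, here is a minimal Bloom filter sketch in plain JavaScript (illustrative only; Baqend's actual Cache Sketch implementation differs):

```javascript
// A Bloom filter stores set membership in a fixed-size bit array using
// k hash functions. Lookups can yield false positives, never false negatives.
class BloomFilter {
  constructor(bits, hashes) {
    this.size = bits;
    this.hashes = hashes;
    this.bits = new Uint8Array(Math.ceil(bits / 8));
  }

  // FNV-1a-style string hash with a seed, good enough for a demo
  hash(value, seed) {
    let h = 2166136261 ^ seed;
    for (let i = 0; i < value.length; i++) {
      h ^= value.charCodeAt(i);
      h = Math.imul(h, 16777619);
    }
    return (h >>> 0) % this.size;
  }

  add(value) {
    for (let s = 0; s < this.hashes; s++) {
      const idx = this.hash(value, s);
      this.bits[idx >> 3] |= 1 << (idx & 7);
    }
  }

  contains(value) {
    for (let s = 0; s < this.hashes; s++) {
      const idx = this.hash(value, s);
      if (!(this.bits[idx >> 3] & (1 << (idx & 7)))) return false;
    }
    return true; // a "true" may be a false positive
  }
}

// 11 KB of bits, as in the footprint example above
const staleSet = new BloomFilter(11 * 1024 * 8, 3);
staleSet.add('/db/Todo/myTodo'); // this resource changed recently

staleSet.contains('/db/Todo/myTodo'); // true  -> bypass browser cache, revalidate at the CDN
staleSet.contains('/db/Todo/other');  // false (with high probability) -> serve from browser cache
```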

There is a lot of stream processing (query match detection), machine learning (optimal TTL estimation) and distributed coordination (scalable Bloom filter maintenance) happening on the server side. If you're interested in the nitty-gritty details, have a look at this paper or these slides for a deep dive.

Note: Caching is active for all CRUD operations by default. Query Caching is currently in beta; if you would like to test it, please contact support@baqend.com.

Configuring Freshness

Any new page load will always return the newest data (a fresh Bloom filter is fetched). While the app is running, you can allow a configurable maximum staleness to make optimal use of the browser cache. This does not mean that you will actually see outdated content; it just gives you an upper bound that is never exceeded.

There are two settings affecting Bloom filter freshness that can be configured in the dashboard:

Tip: You can increase both staleness settings if your application is under very heavy load. This saves requests and prevents scalability bottlenecks if you are on the free tier.

If you want to override the total staleness in individual clients, you can set it manually:

    { staleness: 10 }

For individual operations you can optionally bypass the cache to get strong consistency (linearizability):

//To get the newest version via the id
var todo = DB.Todo.load("myTodo", {refresh : true });

//To update a known instance to the latest version
todo.load({refresh : true });

Local Objects

You can request already-loaded objects using the local flag. It will try to give you an instance you have already loaded and only loads it from the server if it is not present:

//If we have seen "myTodo" in the code we will get exactly that instance
DB.Todo.load("myTodo", {local : true }).then(...);

//local is the default for the instance method
todo.load().then(...);

//This is also useful to see your own unsaved changes, irrespective of updates from other users
todo.done = true;
DB.Todo.load("myTodo", {local : true }).then(function() {
    console.log(todo.done); // true
});


As required by many apps, we provide an easy-to-use logging API to log data from your app. Additionally, the Baqend dashboard shows access logs which contain all the resources requested by your users.

App logs and access logs are accessible through the Baqend dashboard and are kept for 30 days. In addition, you can view, query and manage the permissions of the logs like any other data you persist to Baqend. However, you can't modify the schema, the logged data, or the permissions of insert, update and delete operations.

Note: When querying logs, you must always use a date predicate; otherwise you will only get the last 5 minutes of the logs.

App logging

The Baqend SDK provides a simple logging API which you can use in your app as well as in Baqend code.

The SDK provides a simple log method which takes a log level, a message, arguments and an optional data object. In addition, the SDK logs the current date and the logged-in user.

Log Levels

You can use multiple log levels to categorize your logs. You can use one of the predefined logging levels trace, debug, info, warn, error. Log levels can later be used to filter logs.

DB.log('debug', 'A simple debug message');

If you do not provide a log level, the message is logged with level info.

For easier usage, the log method also exposes an additional method for each log level:

DB.log.trace('A simple trace message');
DB.log.debug('A simple debug message');
DB.log.info('A simple info message');
DB.log.warn('A simple warn message');
DB.log.error('A simple error message');

By default only error, warn and info logs are activated. If you want to use debug or trace logs, or deactivate one of the other levels, you can specify the minimum log level to track like this:

DB.log.level = 'debug'; // to track all logs except for 'trace'
DB.log.level = 'warn'; // to track only 'warn' and 'error'
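Conceptually, the configured level acts as a threshold over the ordered levels; a sketch of the filtering (LEVELS and isLogged are illustrative, not SDK code):

```javascript
// The five levels in ascending severity; a message is tracked when its
// level is at least the configured minimum level.
var LEVELS = ['trace', 'debug', 'info', 'warn', 'error'];

function isLogged(minLevel, messageLevel) {
  return LEVELS.indexOf(messageLevel) >= LEVELS.indexOf(minLevel);
}

isLogged('debug', 'trace'); // false - 'trace' stays filtered out
isLogged('warn', 'info');   // false - only 'warn' and 'error' are tracked
isLogged('warn', 'error');  // true
```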

Log Arguments

It is easy to include dynamic data in the log message. You can use placeholders in your log message which will be replaced by the additionally passed values. You can use the placeholders %s for strings, %d for numbers and %j for a JSON conversion before the values are included in the log message.

DB.log('debug', 'The value %d is greater than %d', 10, 5);
//logs the message 'The value 10 is greater than 5'
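The substitution works like util.format-style placeholders; here is a sketch of the idea (formatMessage is a hypothetical helper, not the SDK's implementation):

```javascript
// Replace %s, %d and %j placeholders with the trailing arguments,
// converting each according to its placeholder type.
function formatMessage(message) {
  var args = Array.prototype.slice.call(arguments, 1);
  var i = 0;
  return message.replace(/%[sdj]/g, function(spec) {
    var arg = args[i++];
    if (spec === '%s') return String(arg);
    if (spec === '%d') return String(Number(arg));
    return JSON.stringify(arg); // '%j'
  });
}

formatMessage('The value %d is greater than %d', 10, 5);
// 'The value 10 is greater than 5'
formatMessage('User %j logged in', {name: 'Alice'});
// 'User {"name":"Alice"} logged in'
```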

Often you want to log additional data which should not be converted to a string and included in the log message itself. All log methods allow one additional argument as the last argument. This argument should be a JSON-like object and will be logged in addition to the log message.

DB.log('debug', 'The value %d is greater than %d', 10, 5, {val1: 10, val2: 5});
//logs the message 'The value 10 is greater than 5'
//and the data {val1: 10, val2: 5}

You can also use the log level helper methods:

DB.log.debug('The value %d is greater than %d', 10, 5, {val1: 10, val2: 5});
Note: By default, app logs can be inserted by everyone. To restrict log insertion, you can change the insert permission of the AppLog class in the dashboard.

Access logs

Access logs are automatically collected whenever a resource of your app is accessed through a Fastly server.

The following data will be collected by us: