QuestDB 1.0.x Documentation

Getting Started

QuestDB redefines the standard of database performance. We are a relational database with native support for time series data. Our stack has been built from scratch, free from third-party dependencies, and optimised for performance and scalability. Whatever the device, QuestDB will automatically use your hardware to its maximum, and run fast.

You can interact with QuestDB via its REST API or via an intuitive web console.

It comes with a default configuration that works straight out of the box.

Follow these simple steps to get started:

Supported platforms

We support the following platforms:

  • Mac OS X 64-bit
  • Windows 64-bit
  • Linux 64-bit

We support the following SDK:

  • Java 8 SDK

JRE and 32-bit versions are not supported.

We recommend having Java installed and the JAVA_HOME environment variable set up. Both the Windows and Linux wrappers for QuestDB require JAVA_HOME. The OSX wrapper will figure out the Java location automatically.
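On Linux and macOS, JAVA_HOME can be exported from your shell profile. A minimal sketch follows; the JDK path below is an assumption, so substitute the location of your own Java 8 installation:

```shell
# Point JAVA_HOME at the JDK root (the directory containing bin/java).
# The path below is an assumption -- substitute your own Java 8 location.
export JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64"
# Put the JDK's tools first on the PATH so `java` resolves to this JDK.
export PATH="$JAVA_HOME/bin:$PATH"
echo "$JAVA_HOME"
```

Add these lines to ~/.profile or an equivalent file to make the setting persistent across sessions.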

Installing QuestDB

For the packaged versions, please follow these steps:

If you would like to test the latest features, you can also run the latest source code; see Running From Source below.


Windows

Download the tar.gz and extract its contents to a folder of your choice. C:\ usually works.


Linux

Download the tar.gz via browser or from the terminal:

[email protected]:~$ curl -O
 % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                Dload  Upload   Total   Spent    Left  Speed
100 2903k  100 2903k    0     0  3362k      0 --:--:-- --:--:-- --:--:-- 3360k

Then extract it as follows:

[email protected]:~$ tar xvfz questdb-1.0.2-bin.tar.gz 


OSX

OSX installation is made simple by Homebrew. Make sure you have it installed before continuing.

To install QuestDB run:

mbp:~ user$ brew install questdb
==> Downloading
==> Caveats
To have launchd start questdb now and restart at login:
  brew services start questdb
Or, if you don't want/need a background service you can just run:
  questdb start
==> Summary
🍺  /usr/local/Cellar/questdb/1.0.0: 7 files, 3.0M, built in 0 seconds

Running QuestDB

To run QuestDB, please follow the instructions for your specific OS below.

To run from the source code, see Running From Source below.


Windows

To start:

Launch cmd.exe:


Change directory to where you extracted tar.gz :

C:\>cd questdb-1.0.2

C:\questdb-1.0.2>dir
Volume in drive C has no label.
Volume Serial Number is 9CD8-DB1D

Directory of C:\questdb-1.0.2

26/10/2016  19:33    <DIR>          .
26/10/2016  19:33    <DIR>          ..
02/08/2016  22:04            35,179 LICENSE.txt
26/10/2016  19:33    <DIR>          qdbroot
26/10/2016  13:32            75,322 questdb.exe
26/10/2016  16:10         3,107,733 questdb.jar
19/10/2016  20:44             5,206
              4 File(s)      3,223,440 bytes
              3 Dir(s)  796,188,151,808 bytes free

Then run the launcher as follows:

C:\questdb-1.0.2>questdb.exe start

QuestDB HTTP Server 1.0.2
Listening on [HTTP plain]
To run as a service:

Run questdb.exe as Administrator.

To stop:

Simply press Ctrl+C in cmd window.

When run from the console, the QuestDB server home is qdbroot in the current directory.

Additional launcher commands and options:

The Windows launcher supports the following commands and options:

Usage: C:\questdb-1.0.2\questdb.exe
[start|stop|status|install|remove] [-d dir] [-f] [-j JAVA_HOME] [-t tag]

Command   Comments
start     Starts the Windows service. The default service name is QuestDB.
stop      Stops the Windows service.
status    Shows the service status. This command is useful for troubleshooting service problems. It prints RUNNING or INACTIVE depending on whether the service is started or stopped.
install   Installs the Windows service. The default name is QuestDB. Windows service names have to be unique, so if you would like to run multiple instances of QuestDB you have to use the -t option.
remove    Removes the Windows service.

Option    Comments
-j        Path to the Java SDK directory. By default QuestDB uses the value of the JAVA_HOME environment variable.
-d        Path to the QuestDB home directory.
-t        Service name suffix tag. -t X will create the service name QuestDB:X. This option can be used with all commands.


Linux

To start:

The launch script is part of the tarball; questdb.exe is part of the multi-platform package and can be ignored or deleted on Linux. Start QuestDB as follows:

[email protected]:~$ cd questdb-1.0.2/
[email protected]:~/questdb-1.0.2$ ./ start


Listening on [HTTP plain]

The QuestDB server will run in the background and continue running even if you close the terminal session.

To stop:
[email protected]:~/questdb-1.0.2$ ./ stop


Stopped 4631

By default the QuestDB home directory is $HOME/.questdb. You can change this location with the -d command line switch.


OSX

To start:
mbp:~ user$ questdb start


QuestDB HTTP Server 1.0
Listening on [HTTP plain]
To stop:
mpb:~ user$ questdb stop


Stopped 82395
To start with a different home directory:

The QuestDB default home directory is /usr/local/var/questdb. To change the home directory, start QuestDB with the -d switch:

mbp:~ user$ questdb start -d $HOME/.questdb


QuestDB HTTP Server 1.0
Listening on [HTTP plain]

Running From Source

QuestDB is under constant development, and the packaged versions can lag behind in terms of features. Explore the latest features by running straight from the source code!


You will need:

  • JDK:
  1. Find version 8u212 for your corresponding machine (Linux, macOS, Windows)
  2. Accept the licence agreement at the top of the table
  3. Click the download button and follow the installation steps for the JDK.
  • IntelliJ IDEA: simply follow the installation steps

Clone from Github

  1. Launch IntelliJ IDEA. In the main window select “Check out from Version Control” and then select “Git”.
  2. Paste the link to the QuestDB source code repository and click the Clone button
  3. Once the project is open, at the top right corner, if Select SDK appears, click it and then select the latest SDK 1.8 available.
  4. Close the project and return to the start menu of IntelliJ IDEA
  5. Click on Import Project
  6. Select the /IdeaProjects/ folder, then the /Questdb/ folder, and click Open.
  7. In the next window, select Maven and click Next
  8. In the next screen Import project just click Next
  9. And in the next screen again just click Next
  10. On the next screen (SDK) select the latest 1.8 version and click Next
  11. Finally, choose a project name or use the default Parent name and click Finish
  12. If you are warned that this should override an existing folder, click Yes

Build and Run

  1. Look for the module ServerMain. To locate it, press the SHIFT key twice in a row and type ServerMain. Once you have found it, double-click it to open it.
  2. If a popup window prompts you to add files to git, tick the box never ask again and click Cancel
  3. Next to ServerMain is a green arrow. Click it, then click Run ServerMain.main(). You will see at the bottom of your IntelliJ window that the software is building.
  4. The initialisation requires you to supply parameters; without them the first run exits immediately:
     /Library/Java/JavaVirtualMachines/jdk1.8.0_212.jdk/Contents/Home/bin/java " …
     QuestDB server [DEVELOPMENT]
     Copyright (C) Appsicle 2014-2019, all rights reserved.
     Root directory name expected
     Process finished with exit code 0
  5. To do so, locate the ServerMain dropdown at the top of your screen and click on Edit Configurations
  6. In the window that will open, enter Program arguments:  -d "/tmp" -f
  7. Repeat step 3 (run ServerMain). If it succeeds, you will see the following in the logs:
     Site content is ready
     2019-06-25T22:03:39.415068Z I IODispatcher listening on 0:9000 [fd=33]
     2019-06-25T22:03:40.221138Z I c.q.c.h.HttpServer os scheduled [name=questdb-http-0]
     2019-06-25T22:03:40.261400Z I c.q.c.h.HttpServer started
     2019-06-25T22:03:40.261505Z I c.q.c.h.HttpServer os scheduled [name=questdb-http-1]
  8. In a web browser, visit http://localhost:9000/index.html, which will take you to the administration console. From there you can load and query data and have fun!

Stopping IntelliJ

To stop running QuestDB, simply open IntelliJ and click on the red square.

Getting Test Data

We encourage you to test QuestDB with your own data. If you do not have data available, you can use a publicly available sample file such as the ratings.csv used in the examples below.

Simply download the file and, if zipped, extract it. We recommend you do this in the /tmp/ folder.

Web Console

QuestDB offers an intuitive web console to interact with your data without using the API. It allows you to import data into the database, run queries on the fly, and export data. Each action displays a timer so you can get a sense of the performance.

After starting QuestDB, you can access the console on http://localhost:9000/index.html

Loading Data

The load screen can be accessed from the left-side menu.

Drag and drop method

Simply drag and drop the data file you wish to import in the drag & drop area:

Browse files method

Click the browse your files button, and select the file you wish to import.

Customising the upload

After you have loaded data with one of the two methods listed above, you will see it in your console. You can now change the import parameters and import again if you wish. The option buttons (shown as icons in the console) are:

  • If selected, uploaded data will be appended to the existing table.
  • If selected, uploaded data will override the existing data.
  • If selected, the first row of your import data will be recognised as headers. If not, header names will be generated automatically (f1, f2, etc.)
  • The final button triggers the upload with the parameters you have selected.

Querying Data

The SQL console allows you to run queries on your database directly from the browser. You can access it from the left-side menu.

Running a query and seeing the results.

Simply type your query in the editor, then click the run button.

The results will be displayed in the table below the SQL editor. You will also see metrics such as execution time and number of records.



REST API

The QuestDB REST API is based around standard HTTP features and is understood by off-the-shelf HTTP clients. API functions are keyed on the full URL and use query parameters as their arguments. Responses are function specific: for example, you can download a query result as a CSV file directly from the API, whereas some API responses are JSON. The available functions are /imp, /exec, /exp and /chk, and they are detailed below.

Loading Data

The import function /imp streams tabular text data directly into a table. It supports CSV, TAB and pipe (|) delimited inputs with optional headers. There are no restrictions on data size. Data types and structure are detected automatically, usually without additional configuration. However, in some cases additional configuration can be provided to augment the automatic detection results.

/imp takes column names from the header row and uses them as table columns. The following characters are removed from column names:

[space] _ ? . , \ / \0 : ) ( + - * % ~

When the header row is missing, column names are generated automatically.


The /imp request is a multipart HTTP POST. It accepts a mixture of form and query arguments:

Argument    Remarks
schema      optional, form. URL-encoded string of type hints (see Field types below). The schema parameter must always precede data.
data        required, form. Data to stream. If the target table name is not explicitly specified, the file name will be used. The request will fail if the file name is also missing.
name        optional, query. Name of the target table. This parameter takes precedence over the file name.
overwrite   optional, query. Boolean flag. Default value is false. Set it to true to have the existing table deleted before appending data.
fmt         optional, query. The acceptable value is json to have a JSON response; otherwise the response is human-readable text.
durable     optional, query, boolean. When a request is durable, QuestDB will flush the relevant disk cache before responding. Default value is false.
atomicity   optional, query. Available values are strict and relaxed. Default value is relaxed. When atomicity is relaxed, data rows that cannot be appended to the table are discarded, thus allowing partial uploads. In strict mode the upload fails as soon as any data error is encountered and all previously appended rows are rolled back. The additional server parameter http.abort.broken.uploads governs whether the server will continue to receive all of the data to completion or close the socket right away.

When the server is configured to obey the HTTP protocol and receive incoming data even though it knows the data is useless, it will do so with a reduced amount of data processing. It has to parse the multipart data, but the delimited file will not be parsed, speeding up the process.

When http.abort.broken.uploads is set to true, the server will close the socket as soon as it detects a data error. This usually forces clients to stop sending data, saves waiting time and frees up server resources immediately. The downside is that some HTTP clients will not receive the diagnostic message, even though the server sends it in all cases.
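Putting these arguments together, here is a sketch of a strict, physically durable upload. The host and file name are illustrative, and the command is printed rather than sent so it can be inspected first:

```shell
# Compose an /imp request with strict atomicity and physical durability.
# Host and file name are assumptions; remove the `echo` to send the request
# against a running QuestDB server.
IMP_URL='http://localhost:9000/imp?atomicity=strict&durable=true&fmt=json'
CMD="curl -i -F data=@ratings.csv $IMP_URL"
echo "$CMD"
```

With atomicity=strict, a single bad row rolls back the whole upload; with durable=true the server flushes the disk cache before responding, trading append speed for resilience.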

Field types

Field types that can be specified in schema are combinations of physical data type and text format.

Imported Type   Data Type   Format
BOOLEAN         BOOLEAN     case-insensitive match with `true`
BYTE            BYTE        int, -127 to 127
DOUBLE          DOUBLE      floating point value
FLOAT           FLOAT       floating point value
DATE_ISO        DATE        2016-05-10T14:55:11.123Z
DATE_1          DATE        2016-05-10 14:55:11


The following examples upload ratings.csv, which is available for download online. The response shows the table name, columns, types, the error count in each column and the total rows. When column types are correct, the error count must be zero.

Basic import:
mpb:ml-latest user$ curl -i -F [email protected] http://localhost:9000/imp


HTTP/1.1 200 OK
Server: questDB/1.0
Date: Fri, 28 Oct 2016 17:58:31 GMT
Transfer-Encoding: chunked
Content-Type: text/plain; charset=utf-8

|      Location:  |               /Users/vlad/dev/data/db/ratings.csv  |    Errors  |
|   Partition by  |                                              NONE  |            |
|   Rows handled  |                                          22884377  |            |
|  Rows imported  |                                          22884377  |            |
|              0  |                                     userId INT(4)  |         0  |
|              1  |                                    movieId INT(4)  |         0  |
|              2  |                                  rating DOUBLE(8)  |         0  |
|              3  |                                  timestamp INT(4)  |         0  |

JSON response for the same request would be:

{ "status": "OK", "location": "ratings.csv", "rowsRejected": 0, "rowsImported": 22884377, "columns": [ { "name": "userId", "type": "INT", "size": 4, "errors": 0 }, { "name": "movieId", "type": "INT", "size": 4, "errors": 0 }, { "name": "rating", "type": "DOUBLE", "size": 8, "errors": 0 }, { "name": "timestamp", "type": "INT", "size": 4, "errors": 0 } ] }
Import with schema:

This example overrides types of userId and movieId by including schema parameter:

mbp:ml-latest user$ curl -i -F 'schema=userId=STRING&movieId=STRING' -F [email protected] http://localhost:9000/imp


HTTP/1.1 200 OK
Server: questDB/1.0
Date: Sun, 30 Oct 2016 1:20:7 GMT
Transfer-Encoding: chunked
Content-Type: text/plain; charset=utf-8

|      Location:  |               /Users/vlad/dev/data/db/ratings.csv  |    Errors  |
|   Partition by  |                                              NONE  |            |
|   Rows handled  |                                          22884377  |            |
|  Rows imported  |                                          22884377  |            |
|              0  |                                 userId STRING(16)  |         0  |
|              1  |                                movieId STRING(16)  |         0  |
|              2  |                                  rating DOUBLE(8)  |         0  |
|              3  |                                  timestamp INT(4)  |         0  |
Import with several parameters:

This example shows the combination of several import parameters:

curl -i -F [email protected] 'http://localhost:9000/imp?forceHeaders=true&overwrite=true'


The /imp function is fully ACID compliant, although Atomicity and Durability can be relaxed to meet convenience and performance demands.

Atomicity is fully ensured against any connection problems. If the server detects a closed socket, the entire request is rolled back instantly and transparently for any existing readers. The only time data can be partially imported is when atomicity is in relaxed mode and data cannot be converted to the column type. In this scenario the defective row of data is discarded and /imp continues to stream the request data into the table.

Consistency is guaranteed by the consistency of append transactions against the QuestDB storage engine.

Isolation: data is committed to the QuestDB storage engine at the end of the request. Uncommitted transactions are not visible to readers.

Durability: /imp streams data from the network socket buffer directly into memory-mapped files. At this point the data is handed over to the OS and is resilient against QuestDB internal errors and unlikely, but hypothetically possible, crashes. This is the default method of appending data and it is chosen for its performance characteristics. In cases where a transaction has to be resilient against OS errors or power losses, physical durability can be enforced: at a cost to append performance, the QuestDB storage engine will also guarantee that each memory block is flushed to the physical device.

Querying data

/exec compiles and executes the SQL query text supplied as an argument and returns a JSON object with either data or an error. The error object contains a message and the position in the query text; the position is the number of characters from the beginning of the query where the error occurred.

The result of a successful execution is a JSON object containing an array of data rows. Each data row is an array of column values. Dataset metadata is returned in the columns field: a list of column names and their types.

Query execution terminates automatically when the socket connection is closed.


/exec is an HTTP GET request with the following query arguments:

Argument   Remarks
query      required. URL-encoded query text. It can be multi-line, but a query separator such as ; must not be included.
limit      optional. This argument is used for paging. The limit can be either in the format X,Y, where X is the lower limit and Y is the upper, or just Y. For example, limit=10,20 will return row numbers 10 through 20 inclusive, and limit=20 will return the first 20 rows, which is equivalent to limit=0,20.
count      optional, boolean. Instructs /exec to count rows and return this value in the message header. Default value is false. There is a slight performance hit for requesting the row count.
nm         optional, boolean. Skips the metadata section of the response when true. When the metadata is known and the client is paging, this flag should typically be set to true to reduce the response size. Default value is false and metadata is included in the response.
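As a sketch of how these arguments combine for paging, the following builds two requests: page one asks for the row count, and page two skips the metadata it already received. The query and host are illustrative, and the commands are printed rather than executed:

```shell
# Page through a result set with /exec. Page one requests a row count;
# page two sets nm=true to skip the metadata section it already has.
# Remove the `echo`es to run the requests against a live server.
BASE='http://localhost:9000/exec'
QUERY='select movieId, rating from ratings'
PAGE1="curl -G $BASE --data-urlencode \"query=$QUERY\" -d limit=0,20 -d count=true"
PAGE2="curl -G $BASE --data-urlencode \"query=$QUERY\" -d limit=20,40 -d nm=true"
echo "$PAGE1"
echo "$PAGE2"
```

curl's --data-urlencode takes care of percent-encoding the query text, and -G turns the data arguments into a GET query string.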

Success response

This is an example of successful query execution response. HTTP status code 200.

{ "query": "select AccidentIndex, Date, Time from 'Accidents0514.csv' limit 2", "columns": [ { "name": "AccidentIndex", "type": "STRING" }, { "name": "Date", "type": "DATE" }, { "name": "Time", "type": "STRING" } ], "dataset": [ [ "200501BS00001", "2005-01-04T00:00:00.000Z", "17:42" ], [ "200501BS00002", "2005-01-05T00:00:00.000Z", "17:36" ] ], "count": 2 }

Error response

Example of error response. HTTP status code 400 is used for query errors and 500 for internal server errors, which should not normally occur.

{ "query": "\nselect AccidentIndex, Date, Time2 from 'Accidents0514.csv' limit 10", "error": "Invalid column: Time2", "position": 29 }

Exporting data

QuestDB allows exporting the results of query execution; the /exp function does exactly that. The exported data is saved in a CSV (comma-delimited) file with Unix line endings (\n). The file is offered for download rather than being displayed in the browser.

The server responds with HTTP 200 when query execution is successful, and with HTTP 400 and the error text when there is an error.


/exp is an HTTP GET request with the following query arguments:

Argument   Remarks
query      required. URL-encoded query text. It can be multi-line, but a query separator such as ; must not be included.
limit      optional. This argument is used for paging. The limit can be either in the format X,Y, where X is the lower limit and Y is the upper, or just Y. For example, limit=10,20 will return row numbers 10 through 20 inclusive, and limit=20 will return the first 20 rows, which is equivalent to limit=0,20.

Success response

Below is an example of exporting data from the command line using curl:

mbp:~ vlad$ curl -v -G http://localhost:9000/exp \
                 --data-urlencode "query=select AccidentIndex, Date, Time from 'Accidents0514.csv'" \
                 -d limit=5


*   Trying ::1...
* connect to ::1 port 9000 failed: Connection refused
*   Trying
* Connected to localhost ( port 9000 (#0)
> GET /exp?query=select%20AccidentIndex%2C%20Date%2C%20Time%20from%20%27Accidents0514.csv%27&limit=5 HTTP/1.1
> Host: localhost:9000
> User-Agent: curl/7.49.1
> Accept: */*
< HTTP/1.1 200 OK
< Server: questDB/1.0
< Date: Wed, 9 Nov 2016 17:58:54 GMT
< Transfer-Encoding: chunked
< Content-Type: text/csv; charset=utf-8
< Content-Disposition: attachment; filename="questdb-query-1478714334308.csv"
* Connection #0 to host localhost left intact

Error response

When a query contains syntax errors, /exp attempts to return as much diagnostic information as possible. An example erroneous request:

mbp:ui vlad$ curl -v -G http://localhost:9000/exp \
>                  --data-urlencode "query=select AccidentIndex2, Date, Time from 'Accidents0514.csv'" \
>                  -d limit=5


*   Trying ::1...
* connect to ::1 port 9000 failed: Connection refused
*   Trying
* Connected to localhost ( port 9000 (#0)
> GET /exp?query=select%20AccidentIndex2%2C%20Date%2C%20Time%20from%20%27Accidents0514.csv%27&limit=5 HTTP/1.1
> Host: localhost:9000
> User-Agent: curl/7.49.1
> Accept: */*
< HTTP/1.1 400 Bad request
< Server: questDB/1.0
< Date: Wed, 9 Nov 2016 18:3:55 GMT
< Transfer-Encoding: chunked
< Content-Type: text/csv; charset=utf-8
< Content-Disposition: attachment; filename="questdb-query-1478714635400.csv"
Error at(7): Invalid column: AccidentIndex2
* Connection #0 to host localhost left intact

Query Language

QuestDB SQL is an interpretation of standard SQL that works with time series and temporal data. The basics of QuestDB SQL should be immediately familiar to a seasoned SQL user, and our extensions should be easy to pick up.

Selecting columns

The select clause allows you to specify the list of columns and expressions to be selected and evaluated from a table.

All columns

QuestDB has an optional select .. from clause. Omit it to get all columns:

ratings

The standard SQL equivalent is select * from ratings.

List of columns

You can select a comma-separated list of column names. The following example selects columns movieId and rating from the table ratings:

select movieId, rating from ratings


The select clause can evaluate multiple expressions and functions; in this respect it is similar to standard SQL. You can mix a comma-separated list of arithmetic expressions with the column names you are selecting:

select movieId, (100 - rating)*2 xyz, rating > 3.5 good from ratings

Column aliases

A column alias is a literal that follows a column name or expression after a space. Aliases are used to name expressions and rename columns; aliases must be unique among themselves and the existing column names.

select movieId abc, rating xyz from ratings


QuestDB evaluates aggregation functions without the need for a traditional group by. Simply use a mix of column names and aggregation functions in the select clause. You can have any number of discrete value columns and any number of aggregation functions.

select movieId, avg(rating), count() from ratings

Aggregation arithmetic

Aggregation functions can be used in arithmetic expressions. The following computes the midpoint of rating values for every movie:

select movieId, (min(rating) + max(rating))/2 mid, count() count from ratings

Aggregate Functions

Currently implemented aggregate functions:

Function      Remarks
sum(expr)     Sums values of numeric expr. The return type is double for expr of types double and float, int for expr of types byte, short and int, and long for expr of type long.
lsum(expr)    Sums values of expr and returns long. This function can sum int values without overflowing the result.
avg(expr)     Calculates the average value of expr.
max(expr)     Calculates the maximum value of expr. The return type is the same as the type of expr.
min(expr)     Calculates the minimum value of expr. The return type is the same as the type of expr.
first(expr)   First value of expr in the natural order of records.
last(expr)    Last value of expr in the natural order of records.
count()       Counts the records in the query.
count(expr)   Counts the records with non-null values of expr.
var(expr)     Calculates the variance of expr.
stddev(expr)  Calculates the standard deviation of values of expr.
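Several of the functions above can be combined in a single select; a sketch against the ratings table used elsewhere in this guide:

```sql
select movieId,
       count()        ratingCount,
       avg(rating)    meanRating,
       min(rating)    lowest,
       max(rating)    highest,
       stddev(rating) spread
from ratings
```

As with the earlier aggregation examples, no group by is required: QuestDB groups by the discrete movieId column automatically.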


QuestDB supports a standard where clause for filtering data. Filter expressions are required to return a boolean result.

List of supported boolean operators:

Operator       Remarks
and (binary)   Logical AND operator.
or (binary)    Logical OR operator.
not (unary)    Logical NOT operator. Example: where not(x > 10)
< (binary)     Numerical LESS than.
> (binary)     Numerical GREATER than.
<= (binary)    Numerical LESS than OR EQUAL.
>= (binary)    Numerical GREATER than OR EQUAL.
= (binary)     Equality operator applicable to all types. where x = 'ABC' and y = 10
eq (3 args)    Equality operator applicable to the double type. It compares two double values with a delta: where eq(x, 0.5, 0.000001). This is logically equivalent to abs(x - 0.5) < 0.000001.
!= (binary)    Non-equality operator applicable to all types.
in (binary)    Returns true when the value of the left argument is one of the list of values of the right argument: where x in ('ABC','CDE','HJK'). This operator performs better than the logically equivalent where x = 'ABC' or x = 'CDE' or x = 'HJK'.
~ (binary)     Returns true when the value of the left argument matches a regular expression: where x ~ 'THE'. Here THE is the pattern; in this case it matches all strings that contain THE. For more details on the pattern, please refer to the Java Pattern documentation.
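To illustrate how these operators combine, here is a hedged sketch assuming a hypothetical trades table with sym and price columns:

```sql
trades where sym in ('AAPL', 'MSFT', 'IBM')
         and price >= 100
         and not (sym ~ 'IB')
```

in avoids a chain of = comparisons, while ~ excludes any symbol whose name matches the regular expression IB.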

Timestamp search

Use the = operator and a UTC date string for exact timestamp matches:

ratings where timestamp = '2010-01-12T00:02:26.000Z'

The QuestDB SQL optimiser will create a more efficient plan when the data is a time series naturally ordered by timestamp.

Interval search

Using >,>=,<,<= operators:

ratings where timestamp > '2010-01-12T00:00:00.000Z' and timestamp < '2010-01-12T00:59:59.999Z'

Using in operator, for example:

ratings where timestamp in ('2010-01-12T00:00:00.000Z', '2010-01-12T00:59:59.999Z')

in is inclusive of edges and supports exactly two UTC timestamps.

Using the = operator and a partial UTC timestamp string. The example below selects data between 14:00 and 14:59 on 12 January 2010:

ratings where timestamp = '2010-01-12T14'

To obtain intervals, UTC timestamps can be truncated at seconds, minutes, hours, days, months and years. This example selects data for the whole of 2011:

ratings where timestamp = '2011'

Using the = operator and an interval length modifier to specify intervals longer than a single time unit. This example selects 2pm, 3pm and 4pm data:

ratings where timestamp = '2010-01-12T14;2h'

QuestDB uses the following algorithm to create the interval: 2010-01-12T14 is translated into the natural interval [2010-01-12T14:00:00.000Z, 2010-01-12T14:59:59.999Z], then 2 hours are added to the upper bound, resulting in the interval [2010-01-12T14:00:00.000Z, 2010-01-12T16:59:59.999Z].

The interval modifier is a semicolon followed by a number and a unit letter (for example ;2h), where the letters stand for:

Modifier Remarks
s Seconds
m Minutes
h Hours
d Days
M Months
y Years
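The same algorithm applies to the other units. For instance, assuming the ratings table, the following sketch selects 12, 13 and 14 January 2010:

```sql
ratings where timestamp = '2010-01-12;2d'
```

Here '2010-01-12' expands to the natural interval [2010-01-12T00:00:00.000Z, 2010-01-12T23:59:59.999Z], and the 2d modifier moves the upper bound to 2010-01-14T23:59:59.999Z.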

Floating point value search

You can use the eq operator to compare double and float values with a tolerance, to avoid rounding problems. For example:

prices where eq(bid, 1.56, 0.000001)

You can also use =:

prices where bid = 1.56

but = would not match 1.56 and 1.559999999999.

Search within aggregation results

To filter on aggregation results in QuestDB SQL, use sub-queries. This is straightforward because select .. from is optional. The following example selects all movies that received over 50,000 ratings.

(select movieId x, (min(rating) + max(rating))/2 mid, count() count from ratings) where count > 50000

The standard SQL equivalent would be:

select movieId, (min(rating) + max(rating))/2 mid, count() count from ratings group by movieId having count() > 50000