diff --git a/.travis.yml b/.travis.yml index a045705..3b4adb2 100644 --- a/.travis.yml +++ b/.travis.yml @@ -1,17 +1,16 @@ language: elixir + elixir: - - 1.1.1 - - 1.2.1 + - 1.3 + otp_release: - - 18.0 - - 18.1 -sudo: required -before_install: - - source /etc/lsb-release && echo "deb http://download.rethinkdb.com/apt $DISTRIB_CODENAME main" | sudo tee /etc/apt/sources.list.d/rethinkdb.list - - wget -qO- http://download.rethinkdb.com/apt/pubkey.gpg | sudo apt-key add - - - sudo apt-get update -qq - - sudo apt-get install rethinkdb -y --force-yes -before_script: rethinkdb --daemon + - 18.3 + - 19.1 + install: + - mix local.rebar --force - mix local.hex --force - mix deps.get --only test + +addons: + rethinkdb: '2.3' diff --git a/README.md b/README.md index 4e67ea4..2609ecb 100644 --- a/README.md +++ b/README.md @@ -1,5 +1,7 @@ RethinkDB [![Build Status](https://travis-ci.org/hamiltop/rethinkdb-elixir.svg?branch=master)](https://travis-ci.org/hamiltop/rethinkdb-elixir) =========== +UPDATE: I am not actively developing this. + Multiplexed RethinkDB client in pure Elixir. If you are coming here from elixir-rethinkdb, welcome! @@ -7,60 +9,95 @@ If you were expecting `Exrethinkdb` you are in the right place. We decided to ch I just set up a channel on the Elixir slack, so if you are on there join #rethinkdb. -###Recent changes +### Recent changes + +#### 0.4.0 -####0.2.0 -* Pruned a lot of connection calls that were public. -* Added exponential backoff to connection -* Added supervised changefeeds -* Pruned and polished docs +* Extract Changefeed out into separate [package](https://github.com/hamiltop/rethinkdb_changefeed) +* Accept keyword options with queries -##Getting Started +## Getting Started See [API documentation](http://hexdocs.pm/rethinkdb/) for more details. -###Connection +### Connection -Connections are managed by a process. Start the process by calling `start_link/1`. 
See [documentation for `Connection.start_link/1`](http://hexdocs.pm/rethinkdb/RethinkDB.Connection.html#start_link/1) for supported options.
+Connections are managed by a process. Start the process by calling `start_link/1`. See [documentation for `Connection.start_link/1`](http://hexdocs.pm/rethinkdb/RethinkDB.Connection.html#start_link/1) for supported options.
+
+#### Basic Remote Connection
-####Basic Remote Connection
 ```elixir
 {:ok, conn} = RethinkDB.Connection.start_link([host: "10.0.0.17", port: 28015])
 ```

-####Named Connection
+#### Named Connection
+
 ```elixir
-{:ok, conn} = RethinkDB.Connection.start_link([name: :foo]})
+{:ok, conn} = RethinkDB.Connection.start_link([name: :foo])
 ```

-####Supervised Connection
+#### Supervised Connection
+
 Start the supervisor with:
+
 ```elixir
 worker(RethinkDB.Connection, [[name: :foo]])
 worker(RethinkDB.Connection, [[name: :bar, host: 'localhost', port: 28015]])
 ```

-####Default Connection
+#### Default Connection
+
 A `RethinkDB.Connection` performs parallel queries via pipelining. It can and should be shared among multiple processes. Because of this, it is common to have one connection shared in your application. To create a default connection, we create a new module and `use RethinkDB.Connection`.
+
 ```elixir
 defmodule FooDatabase do
   use RethinkDB.Connection
 end
 ```
+
 This connection can be supervised without a name (it will use the module name as the name).
+
 ```elixir
 worker(FooDatabase, [])
 ```
+
 Queries can be run without providing a connection (it will use the named connection). 
+
 ```elixir
 import RethinkDB.Query

 table("people") |> FooDatabase.run
 ```

-###Query
+#### Connection Pooling
+
+To use a connection pool, add Poolboy to your dependencies:
+
+```elixir
+{:poolboy, "~> 1.5"}
+```
+
+Then, in your supervision tree, add:
+
+```elixir
+worker(:poolboy, [[name: {:local, :rethinkdb_pool}, worker_module: RethinkDB.Connection, size: 10, max_overflow: 0], []])
+```
+
+NOTE: If you want to use changefeeds or any persistent queries, `max_overflow: 0` is required.
+
+Then use it in your code:
+
+```elixir
+db = :poolboy.checkout(:rethinkdb_pool)
+table("people") |> RethinkDB.run(db)
+:poolboy.checkin(:rethinkdb_pool, db)
+```
+
+### Query
+
 `RethinkDB.run/2` accepts a process as the second argument (to facilitate piping).

-####Insert
+#### Insert
+
 ```elixir
 q = Query.table("people")
@@ -68,17 +105,20 @@ q = Query.table("people")
   |> RethinkDB.run conn
 ```

-####Filter
+#### Filter
+
 ```elixir
 q = Query.table("people")
   |> Query.filter(%{last_name: "Smith"})
   |> RethinkDB.run conn
 ```

-####Functions
+#### Functions
+
 RethinkDB supports RethinkDB functions in queries. There are two approaches you can take:

 Use RethinkDB operators
+
 ```elixir
 import RethinkDB.Query
@@ -86,6 +126,7 @@ make_array([1,2,3])
   |> map(fn (x) -> add(x, 1) end)
 ```
 Use Elixir operators via the lambda macro
+
 ```elixir
 require RethinkDB.Lambda
 import RethinkDB.Lambda
@@ -93,7 +134,8 @@ import RethinkDB.Lambda
 make_array([1,2,3])
   |> map(lambda fn (x) -> x + 1 end)
 ```

-####Map
+#### Map
+
 ```elixir
 require RethinkDB.Lambda
 import Query
@@ -110,12 +152,41 @@ table("people")

 See [query.ex](lib/rethinkdb/query.ex) for more basic queries. If you don't see something supported, please open an issue. We're moving fast and any guidance on desired features is helpful. 
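The pieces above compose end-to-end. A minimal sketch (assuming a running RethinkDB server with a `people` table, and the `FooDatabase` module from the connection section; the data values are illustrative):

```elixir
import RethinkDB.Query
require RethinkDB.Lambda
import RethinkDB.Lambda

# Insert a document, then read back the first names of all Smiths.
# FooDatabase is the `use RethinkDB.Connection` module from above and
# must already be running under a supervisor.
table("people")
|> insert(%{first_name: "Will", last_name: "Smith"})
|> FooDatabase.run

table("people")
|> filter(%{last_name: "Smith"})
|> map(lambda fn (person) -> person["first_name"] end)
|> FooDatabase.run
```

The same queries run against a plain or pooled connection by piping into `RethinkDB.run(conn)` instead.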
-###Changes
+#### Indexes
+
+```elixir
+# Simple indexes
+# create
+result = Query.table("people")
+  |> Query.index_create("first_name", Lambda.lambda fn(row) -> row["first_name"] end)
+  |> RethinkDB.run conn
+
+# retrieve
+result = Query.table("people")
+  |> Query.get_all(["Will"], index: "first_name")
+  |> RethinkDB.run conn
+
+
+# Compound indexes
+# create
+result = Query.table("people")
+  |> Query.index_create("full_name", Lambda.lambda fn(row) -> [row["first_name"], row["last_name"]] end)
+  |> RethinkDB.run conn
+
+# retrieve
+result = Query.table("people")
+  |> Query.get_all([["Will", "Smith"], ["James", "Bond"]], index: "full_name")
+  |> RethinkDB.run conn
+```
+
+One limitation in Elixir is that it doesn't support varargs. So in JavaScript you would do `getAll(key1, key2, {index: "uniqueness"})`. In Elixir we have to do `get_all([key1, key2], index: "uniqueness")`. With a single key it becomes `get_all([key1], index: "uniqueness")` and when `key1` is `[partA, partB]` you have to do `get_all([[partA, partB]], index: "uniqueness")`.
+
+### Changes

 Change feeds can be consumed either incrementally (by calling `RethinkDB.next/1`) or via the Enumerable Protocol.

 ```elixir
-q = Query.table("people")
+results = Query.table("people")
   |> Query.filter(%{last_name: "Smith"})
   |> Query.changes
   |> RethinkDB.run conn
@@ -124,12 +195,13 @@ first_change = RethinkDB.next results

 # get stream, chunked in groups of 5, Inspect
 results |> Stream.chunk(5) |> Enum.each &IO.inspect/1
 ```

-###Supervised Changefeeds
-Changefeeds have been moved to their own repo to enable independent release
-cycles. See https://github.com/hamiltop/rethinkdb_changefeed
+### Supervised Changefeeds
+
+Supervised Changefeeds (an OTP behaviour for running a changefeed as a process) have been moved to their own repo to enable independent release cycles. 
See https://github.com/hamiltop/rethinkdb_changefeed + +### Roadmap -###Roadmap Version 1.0.0 will be limited to individual connections and implement the entire documented ReQL (as of rethinkdb 2.0) While not provided by this library, we will also include example code for: @@ -139,12 +211,14 @@ While not provided by this library, we will also include example code for: The goal for 1.0.0 is to be stable. Issues have been filed for work that needs to be completed before 1.0.0 and tagged with the 1.0.0 milestone. -###Example Apps +### Example Apps + Checkout the wiki page for various [example apps](https://github.com/hamiltop/rethinkdb-elixir/wiki/Example-Apps) +### Contributing -###Contributing Contributions are welcome. Take a look at the Issues. Anything that is tagged `Help Wanted` or `Feedback Wanted` is a good candidate for contributions. Even if you don't know where to start, respond to an interesting issue and you will be pointed in the right direction. -####Testing +#### Testing + Be intentional. Whether you are writing production code or tests, make sure there is value in the test being written. diff --git a/lib/rethinkdb/connection.ex b/lib/rethinkdb/connection.ex index d4ea218..7d9e906 100644 --- a/lib/rethinkdb/connection.ex +++ b/lib/rethinkdb/connection.ex @@ -93,6 +93,11 @@ defmodule RethinkDB.Connection do * `timeout` - How long to wait for a response * `db` - Default database to use for query. Can also be specified as part of the query. + * `durability` - possible values are 'hard' and 'soft'. In soft durability mode RethinkDB will acknowledge the write immediately after receiving it, but before the write has been committed to disk. + * `noreply` - set to true to not receive the result object or cursor and return immediately. + * `profile` - whether or not to return a profile of the query’s execution (default: false). + * `time_format` - what format to return times in (default: :native). 
Set this to :raw if you want times returned as JSON objects for exporting. + * `binary_format` - what format to return binary data in (default: :native). Set this to :raw if you want the raw pseudotype. """ def run(query, conn, opts \\ []) do timeout = Dict.get(opts, :timeout, 5000) @@ -107,7 +112,7 @@ defmodule RethinkDB.Connection do false -> {:query, query} end case Connection.call(conn, msg, timeout) do - {response, token} -> RethinkDB.Response.parse(response, token, conn) + {response, token} -> RethinkDB.Response.parse(response, token, conn, opts) :noreply -> :ok result -> result end @@ -119,9 +124,9 @@ defmodule RethinkDB.Connection do Since a feed is tied to a particular connection, no connection is needed when calling `next`. """ - def next(%{token: token, pid: pid}) do + def next(%{token: token, pid: pid, opts: opts}) do case Connection.call(pid, {:continue, token}, :infinity) do - {response, token} -> RethinkDB.Response.parse(response, token, pid) + {response, token} -> RethinkDB.Response.parse(response, token, pid, opts) x -> x end end @@ -134,7 +139,7 @@ defmodule RethinkDB.Connection do """ def close(%{token: token, pid: pid}) do {response, token} = Connection.call(pid, {:stop, token}, :infinity) - RethinkDB.Response.parse(response, token, pid) + RethinkDB.Response.parse(response, token, pid, []) end @doc """ @@ -142,7 +147,7 @@ defmodule RethinkDB.Connection do """ def noreply_wait(conn, timeout \\ 5000) do {response, token} = Connection.call(conn, :noreply_wait, timeout) - case RethinkDB.Response.parse(response, token, conn) do + case RethinkDB.Response.parse(response, token, conn, []) do %RethinkDB.Response{data: %{"t" => 4}} -> :ok r -> r end diff --git a/lib/rethinkdb/lambda.ex b/lib/rethinkdb/lambda.ex index 0f6c256..a047532 100644 --- a/lib/rethinkdb/lambda.ex +++ b/lib/rethinkdb/lambda.ex @@ -53,6 +53,8 @@ defmodule RethinkDB.Lambda do quote do Query.branch(unquote(expr), unquote(truthy), unquote(falsy)) end + {:if, _, _} -> + raise "You must 
include an else condition when using if in a ReQL Lambda" x -> x end end diff --git a/lib/rethinkdb/prepare.ex b/lib/rethinkdb/prepare.ex index 10cf96a..ce0b0f7 100644 --- a/lib/rethinkdb/prepare.ex +++ b/lib/rethinkdb/prepare.ex @@ -39,6 +39,10 @@ defmodule RethinkDB.Prepare do {[k,v], state} end defp prepare(el, state) do - {el, state} + if is_binary(el) and not String.valid?(el) do + {RethinkDB.Query.binary(el), state} + else + {el, state} + end end end diff --git a/lib/rethinkdb/pseudotypes.ex b/lib/rethinkdb/pseudotypes.ex index 38275a7..91fa3ec 100644 --- a/lib/rethinkdb/pseudotypes.ex +++ b/lib/rethinkdb/pseudotypes.ex @@ -4,8 +4,13 @@ defmodule RethinkDB.Pseudotypes do @moduledoc false defstruct data: nil - def parse(%{"$reql_type$" => "BINARY", "data" => data}) do - %__MODULE__{data: :base64.decode(data)} + def parse(%{"$reql_type$" => "BINARY", "data" => data}, opts) do + case Dict.get(opts, :binary_format) do + :raw -> + %__MODULE__{data: data} + _ -> + :base64.decode(data) + end end end @@ -23,7 +28,7 @@ defmodule RethinkDB.Pseudotypes do defmodule Polygon do @moduledoc false - defstruct outer_coordinates: [], inner_coordinates: [] + defstruct coordinates: [] end def parse(%{"$reql_type$" => "GEOMETRY", "coordinates" => [x,y], "type" => "Point"}) do @@ -33,11 +38,7 @@ defmodule RethinkDB.Pseudotypes do %Line{coordinates: Enum.map(coords, &List.to_tuple/1)} end def parse(%{"$reql_type$" => "GEOMETRY", "coordinates" => coords, "type" => "Polygon"}) do - {outer, inner} = case coords do - [outer, inner] -> {Enum.map(outer, &List.to_tuple/1), Enum.map(inner, &List.to_tuple/1)} - [outer | []] -> {Enum.map(outer, &List.to_tuple/1), []} - end - %Polygon{outer_coordinates: outer, inner_coordinates: inner} + %Polygon{coordinates: (for points <- coords, do: Enum.map points, &List.to_tuple/1)} end end @@ -45,33 +46,56 @@ defmodule RethinkDB.Pseudotypes do @moduledoc false defstruct epoch_time: nil, timezone: nil - def parse(%{"$reql_type$" => "TIME", "epoch_time" 
=> epoch_time, "timezone" => timezone}) do - %__MODULE__{epoch_time: epoch_time, timezone: timezone} + def parse(%{"$reql_type$" => "TIME", "epoch_time" => epoch_time, "timezone" => timezone}, opts) do + case Dict.get(opts, :time_format) do + :raw -> + %__MODULE__{epoch_time: epoch_time, timezone: timezone} + _ -> + {seconds, ""} = Calendar.ISO.parse_offset(timezone) + zone_abbr = case seconds do + 0 -> "UTC" + _ -> timezone + end + negative = seconds < 0 + seconds = abs(seconds) + time_zone = case {div(seconds,3600),rem(seconds,3600)} do + {0,0} -> "Etc/UTC" + {hours,0} -> + "Etc/GMT" <> if negative do "+" else "-" end <> Integer.to_string(hours) + {hours,seconds} -> + "Etc/GMT" <> if negative do "+" else "-" end <> Integer.to_string(hours) <> ":" <> + String.pad_leading(Integer.to_string(seconds), 2, "0") + end + epoch_time * 1000 + |> trunc() + |> DateTime.from_unix!(:milliseconds) + |> struct(utc_offset: seconds, zone_abbr: zone_abbr, time_zone: time_zone) + end end end - def convert_reql_pseudotypes(nil), do: nil - def convert_reql_pseudotypes(%{"$reql_type$" => "BINARY"} = data) do - Binary.parse(data) + def convert_reql_pseudotypes(nil, opts), do: nil + def convert_reql_pseudotypes(%{"$reql_type$" => "BINARY"} = data, opts) do + Binary.parse(data, opts) end - def convert_reql_pseudotypes(%{"$reql_type$" => "GEOMETRY"} = data) do + def convert_reql_pseudotypes(%{"$reql_type$" => "GEOMETRY"} = data, opts) do Geometry.parse(data) end - def convert_reql_pseudotypes(%{"$reql_type$" => "GROUPED_DATA"} = data) do + def convert_reql_pseudotypes(%{"$reql_type$" => "GROUPED_DATA"} = data, opts) do parse_grouped_data(data) end - def convert_reql_pseudotypes(%{"$reql_type$" => "TIME"} = data) do - Time.parse(data) + def convert_reql_pseudotypes(%{"$reql_type$" => "TIME"} = data, opts) do + Time.parse(data, opts) end - def convert_reql_pseudotypes(list) when is_list(list) do - Enum.map(list, &convert_reql_pseudotypes/1) + def convert_reql_pseudotypes(list, opts) when 
is_list(list) do + Enum.map(list, fn data -> convert_reql_pseudotypes(data, opts) end) end - def convert_reql_pseudotypes(map) when is_map(map) do + def convert_reql_pseudotypes(map, opts) when is_map(map) do Enum.map(map, fn {k, v} -> - {k, convert_reql_pseudotypes(v)} + {k, convert_reql_pseudotypes(v, opts)} end) |> Enum.into(%{}) end - def convert_reql_pseudotypes(string), do: string + def convert_reql_pseudotypes(string, opts), do: string def parse_grouped_data(%{"$reql_type$" => "GROUPED_DATA", "data" => data}) do Enum.map(data, fn ([k, data]) -> diff --git a/lib/rethinkdb/q.ex b/lib/rethinkdb/q.ex index 1eb8fe2..0793213 100644 --- a/lib/rethinkdb/q.ex +++ b/lib/rethinkdb/q.ex @@ -1,9 +1,39 @@ defmodule RethinkDB.Q do @moduledoc false + defstruct query: nil end + defimpl Poison.Encoder, for: RethinkDB.Q do def encode(%{query: query}, options) do Poison.Encoder.encode(query, options) end end + +defimpl Inspect, for: RethinkDB.Q do + @external_resource term_info = Path.join([__DIR__, "query", "term_info.json"]) + + @apidef term_info + |> File.read!() + |> Poison.decode!() + |> Enum.into(%{}, fn {key, val} -> {val, key} end) + + def inspect(%RethinkDB.Q{query: [69, [[2, refs], lambda]]}, _) do + # Replaces references within lambda functions + # with capture syntax arguments (&1, &2, etc). + refs + |> Enum.map_reduce(1, &{{&1, "&#{&2}"}, &2 + 1}) + |> elem(0) + |> Enum.reduce("&(#{inspect lambda})", fn {ref, var}, lambda -> String.replace(lambda, "var(#{inspect ref})", var) end) + end + + def inspect(%RethinkDB.Q{query: [index, args, opts]}, _) do + # Converts function options (map) to keyword list. + Kernel.inspect(%RethinkDB.Q{query: [index, args ++ [Map.to_list(opts)]]}) + end + + def inspect(%RethinkDB.Q{query: [index, args]}, _options) do + # Resolve index & args and return them as string. 
+ Map.get(@apidef, index) <> "(#{Enum.join(Enum.map(args, &Kernel.inspect/1), ", ")})" + end +end diff --git a/lib/rethinkdb/query.ex b/lib/rethinkdb/query.ex index 2330198..bdf929c 100644 --- a/lib/rethinkdb/query.ex +++ b/lib/rethinkdb/query.ex @@ -30,12 +30,12 @@ defmodule RethinkDB.Query do # @doc """ - Takes a stream and partitions it into multiple groups based on the fields or + Takes a stream and partitions it into multiple groups based on the fields or functions provided. - With the multi flag single documents can be assigned to multiple groups, - similar to the behavior of multi-indexes. When multi is True and the grouping - value is an array, documents will be placed in each group that corresponds to + With the multi flag single documents can be assigned to multiple groups, + similar to the behavior of multi-indexes. When multi is True and the grouping + value is an array, documents will be placed in each group that corresponds to the elements of the array. If the array is empty the row will be ignored. """ @spec group(Q.reql_array, Q.reql_func1 | Q.reql_string | [Q.reql_func1 | Q.reql_string] ) :: Q.t @@ -43,12 +43,12 @@ defmodule RethinkDB.Query do operate_on_two_args(:group, 144, opts: true) @doc """ - Takes a grouped stream or grouped data and turns it into an array of objects - representing the groups. Any commands chained after ungroup will operate on - this array, rather than operating on each group individually. This is useful if + Takes a grouped stream or grouped data and turns it into an array of objects + representing the groups. Any commands chained after ungroup will operate on + this array, rather than operating on each group individually. This is useful if you want to e.g. order the groups by the value of their reduction. 
- The format of the array returned by ungroup is the same as the default native + The format of the array returned by ungroup is the same as the default native format of grouped data in the JavaScript driver and data explorer. end """ @@ -56,7 +56,7 @@ defmodule RethinkDB.Query do operate_on_single_arg(:ungroup, 150) @doc """ - Produce a single value from a sequence through repeated application of a + Produce a single value from a sequence through repeated application of a reduction function. The reduction function can be called on: @@ -65,33 +65,33 @@ defmodule RethinkDB.Query do * one element of the sequence and one result of a previous reduction * two results of previous reductions - The reduction function can be called on the results of two previous - reductions because the reduce command is distributed and parallelized across - shards and CPU cores. A common mistaken when using the reduce command is to + The reduction function can be called on the results of two previous + reductions because the reduce command is distributed and parallelized across + shards and CPU cores. A common mistaken when using the reduce command is to suppose that the reduction is executed from left to right. """ @spec reduce(Q.reql_array, Q.reql_func2) :: Q.t operate_on_two_args(:reduce, 37) @doc """ - Counts the number of elements in a sequence. If called with a value, counts - the number of times that value occurs in the sequence. If called with a - predicate function, counts the number of elements in the sequence where that + Counts the number of elements in a sequence. If called with a value, counts + the number of times that value occurs in the sequence. If called with a + predicate function, counts the number of elements in the sequence where that function returns `true`. - If count is called on a binary object, it will return the size of the object + If count is called on a binary object, it will return the size of the object in bytes. 
""" @spec count(Q.reql_array) :: Q.t operate_on_single_arg(:count, 43) @spec count(Q.reql_array, Q.reql_string | Q.reql_func1) :: Q.t operate_on_two_args(:count, 43) - + @doc """ - Sums all the elements of a sequence. If called with a field name, sums all - the values of that field in the sequence, skipping elements of the sequence - that lack that field. If called with a function, calls that function on every - element of the sequence and sums the results, skipping elements of the sequence + Sums all the elements of a sequence. If called with a field name, sums all + the values of that field in the sequence, skipping elements of the sequence + that lack that field. If called with a function, calls that function on every + element of the sequence and sums the results, skipping elements of the sequence where that function returns `nil` or a non-existence error. Returns 0 when called on an empty sequence. @@ -102,13 +102,13 @@ defmodule RethinkDB.Query do operate_on_two_args(:sum, 145) @doc """ - Averages all the elements of a sequence. If called with a field name, - averages all the values of that field in the sequence, skipping elements of the - sequence that lack that field. If called with a function, calls that function - on every element of the sequence and averages the results, skipping elements of + Averages all the elements of a sequence. If called with a field name, + averages all the values of that field in the sequence, skipping elements of the + sequence that lack that field. If called with a function, calls that function + on every element of the sequence and averages the results, skipping elements of the sequence where that function returns None or a non-existence error. - Produces a non-existence error when called on an empty sequence. You can + Produces a non-existence error when called on an empty sequence. You can handle this case with `default`. 
""" @spec avg(Q.reql_array) :: Q.t @@ -119,15 +119,15 @@ defmodule RethinkDB.Query do @doc """ Finds the minimum element of a sequence. The min command can be called with: - * a field name, to return the element of the sequence with the smallest value in + * a field name, to return the element of the sequence with the smallest value in that field; - * an index option, to return the element of the sequence with the smallest value in that + * an index option, to return the element of the sequence with the smallest value in that index; - * a function, to apply the function to every element within the sequence and - return the element which returns the smallest value from the function, ignoring + * a function, to apply the function to every element within the sequence and + return the element which returns the smallest value from the function, ignoring any elements where the function returns None or produces a non-existence error. - Calling min on an empty sequence will throw a non-existence error; this can be + Calling min on an empty sequence will throw a non-existence error; this can be handled using the `default` command. """ @spec min(Q.reql_array, Q.reql_opts | Q.reql_string | Q.reql_func1) :: Q.t @@ -137,15 +137,15 @@ defmodule RethinkDB.Query do @doc """ Finds the maximum element of a sequence. 
  The max command can be called with:

-  * a field name, to return the element of the sequence with the smallest value in
+  * a field name, to return the element of the sequence with the largest value in
   that field;
-  * an index, to return the element of the sequence with the smallest value in that
+  * an index, to return the element of the sequence with the largest value in that
   index;
-  * a function, to apply the function to every element within the sequence and
-  return the element which returns the smallest value from the function, ignoring
+  * a function, to apply the function to every element within the sequence and
+  return the element which returns the largest value from the function, ignoring
   any elements where the function returns None or produces a non-existence error.

-  Calling max on an empty sequence will throw a non-existence error; this can be
+  Calling max on an empty sequence will throw a non-existence error; this can be
   handled using the `default` command.
   """
   @spec max(Q.reql_array, Q.reql_opts | Q.reql_string | Q.reql_func1) :: Q.t
@@ -155,16 +155,16 @@
   @doc """
   Removes duplicates from elements in a sequence.

-  The distinct command can be called on any sequence, a table, or called on a
+  The distinct command can be called on any sequence, a table, or called on a
   table with an index.
   """
   @spec distinct(Q.reql_array, Q.reql_opts) :: Q.t
   operate_on_single_arg(:distinct, 42, opts: true)

   @doc """
-  When called with values, returns `true` if a sequence contains all the specified
-  values. When called with predicate functions, returns `true` if for each
-  predicate there exists at least one element of the stream where that predicate
+  When called with values, returns `true` if a sequence contains all the specified
+  values. When called with predicate functions, returns `true` if for each
+  predicate there exists at least one element of the stream where that predicate
   returns `true`. 
""" @spec contains(Q.reql_array, Q.reql_array | Q.reql_func1 | Q.t) :: Q.t @@ -176,8 +176,8 @@ defmodule RethinkDB.Query do # @doc """ - `args` is a special term that’s used to splice an array of arguments into - another term. This is useful when you want to call a variadic term such as + `args` is a special term that’s used to splice an array of arguments into + another term. This is useful when you want to call a variadic term such as `get_all` with a set of arguments produced at runtime. This is analogous to Elixir's `apply`. @@ -188,36 +188,44 @@ defmodule RethinkDB.Query do @doc """ Encapsulate binary data within a query. + The type of data binary accepts depends on the client language. In + Elixir, it expects a Binary. Using a Binary object within a query implies + the use of binary and the ReQL driver will automatically perform the coercion. + + Binary objects returned to the client in Elixir will also be + Binary objects. This can be changed with the binary_format option :raw + to run to return “raw” objects. + Only a limited subset of ReQL commands may be chained after binary: * coerce_to can coerce binary objects to string types * count will return the number of bytes in the object - * slice will treat bytes like array indexes (i.e., slice(10,20) will return bytes + * slice will treat bytes like array indexes (i.e., slice(10,20) will return bytes * 10–19) * type_of returns PTYPE * info will return information on a binary object. 
""" @spec binary(Q.reql_binary) :: Q.t - def binary(%RethinkDB.Pseudotypes.Binary{data: data}), do: binary(data) - def binary(data), do: do_binary(%{"$reql_type$" => "BINARY", "data" => :base64.encode(data)}) - def do_binary(data), do: %Q{query: [155, [data]]} + def binary(%RethinkDB.Pseudotypes.Binary{data: data}), do: do_binary(data) + def binary(data), do: do_binary(:base64.encode(data)) + def do_binary(data), do: %Q{query: [155, [%{"$reql_type$" => "BINARY", "data" => data}]]} @doc """ - Call an anonymous function using return values from other ReQL commands or + Call an anonymous function using return values from other ReQL commands or queries as arguments. - The last argument to do (or, in some forms, the only argument) is an expression - or an anonymous function which receives values from either the previous - arguments or from prefixed commands chained before do. The do command is - essentially a single-element map, letting you map a function over just one - document. This allows you to bind a query result to a local variable within the - scope of do, letting you compute the result just once and reuse it in a complex + The last argument to do (or, in some forms, the only argument) is an expression + or an anonymous function which receives values from either the previous + arguments or from prefixed commands chained before do. The do command is + essentially a single-element map, letting you map a function over just one + document. This allows you to bind a query result to a local variable within the + scope of do, letting you compute the result just once and reuse it in a complex expression or in a series of ReQL commands. - Arguments passed to the do function must be basic data types, and cannot be - streams or selections. (Read about ReQL data types.) While the arguments will - all be evaluated before the function is executed, they may be evaluated in any - order, so their values should not be dependent on one another. 
The type of do’s + Arguments passed to the do function must be basic data types, and cannot be + streams or selections. (Read about ReQL data types.) While the arguments will + all be evaluated before the function is executed, they may be evaluated in any + order, so their values should not be dependent on one another. The type of do’s result is the type of the value returned from the function or last expression. """ @spec do_r(Q.reql_datum | Q.reql_func0, Q.reql_func1) :: Q.t @@ -227,7 +235,7 @@ defmodule RethinkDB.Query do def do_r(data, f) when is_function(f), do: %Q{query: [64, [wrap(f), wrap(data)]]} @doc """ - If the `test` expression returns False or None, the false_branch will be + If the `test` expression returns False or None, the false_branch will be evaluated. Otherwise, the true_branch will be evaluated. The branch command is effectively an if renamed due to language constraints. @@ -265,10 +273,10 @@ defmodule RethinkDB.Query do operate_on_single_arg(:error, 12) @doc """ - Handle non-existence errors. Tries to evaluate and return its first argument. - If an error related to the absence of a value is thrown in the process, or if - its first argument returns nil, returns its second argument. (Alternatively, - the second argument may be a function which will be called with either the text + Handle non-existence errors. Tries to evaluate and return its first argument. + If an error related to the absence of a value is thrown in the process, or if + its first argument returns nil, returns its second argument. (Alternatively, + the second argument may be a function which will be called with either the text of the non-existence error or nil.) """ @spec default(Q.t, Q.t) :: Q.t @@ -279,7 +287,7 @@ defmodule RethinkDB.Query do The only opt allowed is `timeout`. - `timeout` is the number of seconds before `js` times out. The default value + `timeout` is the number of seconds before `js` times out. The default value is 5 seconds. 
""" @spec js(Q.reql_string, Q.opts) :: Q.t @@ -322,8 +330,8 @@ defmodule RethinkDB.Query do operate_on_single_arg(:to_json, 172) @doc """ - Retrieve data from the specified URL over HTTP. The return type depends on - the result_format option, which checks the Content-Type of the response by + Retrieve data from the specified URL over HTTP. The return type depends on + the result_format option, which checks the Content-Type of the response by default. """ @spec http(Q.reql_string, Q.reql_opts) :: Q.t @@ -343,21 +351,21 @@ defmodule RethinkDB.Query do # @doc """ - Create a database. A RethinkDB database is a collection of tables, similar to + Create a database. A RethinkDB database is a collection of tables, similar to relational databases. If successful, the command returns an object with two fields: * dbs_created: always 1. - * config_changes: a list containing one object with two fields, old_val and + * config_changes: a list containing one object with two fields, old_val and new_val: * old_val: always null. * new_val: the database’s new config value. - If a database with the same name already exists, the command throws + If a database with the same name already exists, the command throws RqlRuntimeError. - Note: Only alphanumeric characters and underscores are valid for the database + Note: Only alphanumeric characters and underscores are valid for the database name. """ @spec db_create(Q.reql_string) :: Q.t @@ -390,58 +398,58 @@ defmodule RethinkDB.Query do # @doc """ - Construct a circular line or polygon. A circle in RethinkDB is a polygon or - line approximating a circle of a given radius around a given center, consisting + Construct a circular line or polygon. A circle in RethinkDB is a polygon or + line approximating a circle of a given radius around a given center, consisting of a specified number of vertices (default 32). 
- The center may be specified either by two floating point numbers, the latitude - (−90 to 90) and longitude (−180 to 180) of the point on a perfect sphere (see - Geospatial support for more information on ReQL’s coordinate system), or by a - point object. The radius is a floating point number whose units are meters by + The center may be specified either by two floating point numbers, the latitude + (−90 to 90) and longitude (−180 to 180) of the point on a perfect sphere (see + Geospatial support for more information on ReQL’s coordinate system), or by a + point object. The radius is a floating point number whose units are meters by default, although that may be changed with the unit argument. Optional arguments available with circle are: - num_vertices: the number of vertices in the polygon or line. Defaults to 32. - - geo_system: the reference ellipsoid to use for geographic coordinates. Possible - values are WGS84 (the default), a common standard for Earth’s geometry, or + - geo_system: the reference ellipsoid to use for geographic coordinates. Possible + values are WGS84 (the default), a common standard for Earth’s geometry, or unit_sphere, a perfect sphere of 1 meter radius. - - unit: Unit for the radius distance. Possible values are m (meter, the default), - km (kilometer), mi (international mile), nm (nautical mile), ft (international + - unit: Unit for the radius distance. Possible values are m (meter, the default), + km (kilometer), mi (international mile), nm (nautical mile), ft (international foot). - - fill: if `true` (the default) the circle is filled, creating a polygon; if `false` + - fill: if `true` (the default) the circle is filled, creating a polygon; if `false` the circle is unfilled (creating a line). """ @spec circle(Q.reql_geo, Q.reql_number, Q.reql_opts) :: Q.t operate_on_two_args(:circle, 165, opts: true) @doc """ - Compute the distance between a point and another geometry object. 
At least one + Compute the distance between a point and another geometry object. At least one of the geometry objects specified must be a point. Optional arguments available with distance are: - - geo_system: the reference ellipsoid to use for geographic coordinates. Possible - values are WGS84 (the default), a common standard for Earth’s geometry, or + - geo_system: the reference ellipsoid to use for geographic coordinates. Possible + values are WGS84 (the default), a common standard for Earth’s geometry, or unit_sphere, a perfect sphere of 1 meter radius. - - unit: Unit to return the distance in. Possible values are m (meter, the - default), km (kilometer), mi (international mile), nm (nautical mile), ft + - unit: Unit to return the distance in. Possible values are m (meter, the + default), km (kilometer), mi (international mile), nm (nautical mile), ft (international foot). - If one of the objects is a polygon or a line, the point will be projected onto - the line or polygon assuming a perfect sphere model before the distance is - computed (using the model specified with geo_system). As a consequence, if the - polygon or line is extremely large compared to Earth’s radius and the distance - is being computed with the default WGS84 model, the results of distance should - be considered approximate due to the deviation between the ellipsoid and + If one of the objects is a polygon or a line, the point will be projected onto + the line or polygon assuming a perfect sphere model before the distance is + computed (using the model specified with geo_system). As a consequence, if the + polygon or line is extremely large compared to Earth’s radius and the distance + is being computed with the default WGS84 model, the results of distance should + be considered approximate due to the deviation between the ellipsoid and spherical models. 
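A sketch of `distance` with the `unit` option described above (the coordinates are longitude/latitude pairs; assumes a started connection `conn` and `RethinkDB.run/2`):

```elixir
# Sketch only: distance between two points, reported in kilometers.
# Assumes `conn` is a started RethinkDB.Connection.
import RethinkDB.Query

point(-122.423246, 37.779388)                           # lon, lat
|> distance(point(-117.220406, 32.719464), unit: "km")  # second point
|> RethinkDB.run(conn)
# The record's data is the distance as a float (roughly 734 km for
# these two coordinates).
```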
""" @spec distance(Q.reql_geo, Q.reql_geo, Q.reql_opts) :: Q.t operate_on_two_args(:distance, 162, opts: true) @doc """ - Convert a Line object into a Polygon object. If the last point does not - specify the same coordinates as the first point, polygon will close the polygon + Convert a Line object into a Polygon object. If the last point does not + specify the same coordinates as the first point, polygon will close the polygon by connecting them. """ @spec fill(Q.reql_line) :: Q.t @@ -450,13 +458,13 @@ defmodule RethinkDB.Query do @doc """ Convert a GeoJSON object to a ReQL geometry object. - RethinkDB only allows conversion of GeoJSON objects which have ReQL - equivalents: Point, LineString, and Polygon. MultiPoint, MultiLineString, and - MultiPolygon are not supported. (You could, however, store multiple points, + RethinkDB only allows conversion of GeoJSON objects which have ReQL + equivalents: Point, LineString, and Polygon. MultiPoint, MultiLineString, and + MultiPolygon are not supported. (You could, however, store multiple points, lines and polygons in an array and use a geospatial multi index with them.) - Only longitude/latitude coordinates are supported. GeoJSON objects that use - Cartesian coordinates, specify an altitude, or specify their own coordinate + Only longitude/latitude coordinates are supported. GeoJSON objects that use + Cartesian coordinates, specify an altitude, or specify their own coordinate reference system will be rejected. """ @spec geojson(Q.reql_obj) :: Q.t @@ -469,36 +477,36 @@ defmodule RethinkDB.Query do operate_on_single_arg(:to_geojson, 158) @doc """ - Get all documents where the given geometry object intersects the geometry + Get all documents where the given geometry object intersects the geometry object of the requested geospatial index. - The index argument is mandatory. This command returns the same results as - `filter(r.row('index')) |> intersects(geometry)`. 
The total number of results - is limited to the array size limit which defaults to 100,000, but can be + The index argument is mandatory. This command returns the same results as + `filter(r.row('index')) |> intersects(geometry)`. The total number of results + is limited to the array size limit which defaults to 100,000, but can be changed with the `array_limit` option to run. """ @spec get_intersecting(Q.reql_array, Q.reql_geo, Q.reql_opts) :: Q.t operate_on_two_args(:get_intersecting, 166, opts: true) @doc """ - Get all documents where the specified geospatial index is within a certain + Get all documents where the specified geospatial index is within a certain distance of the specified point (default 100 kilometers). The index argument is mandatory. Optional arguments are: * max_results: the maximum number of results to return (default 100). - * unit: Unit for the distance. Possible values are m (meter, the default), km - (kilometer), mi (international mile), nm (nautical mile), ft (international + * unit: Unit for the distance. Possible values are m (meter, the default), km + (kilometer), mi (international mile), nm (nautical mile), ft (international foot). - * max_dist: the maximum distance from an object to the specified point (default + * max_dist: the maximum distance from an object to the specified point (default 100 km). - * geo_system: the reference ellipsoid to use for geographic coordinates. Possible - values are WGS84 (the default), a common standard for Earth’s geometry, or + * geo_system: the reference ellipsoid to use for geographic coordinates. Possible + values are WGS84 (the default), a common standard for Earth’s geometry, or unit_sphere, a perfect sphere of 1 meter radius. 
-  The return value will be an array of two-item objects with the keys dist and 
-  doc, set to the distance between the specified point and the document (in the 
-  units specified with unit, defaulting to meters) and the document itself, 
+  The return value will be an array of two-item objects with the keys dist and
+  doc, set to the distance between the specified point and the document (in the
+  units specified with unit, defaulting to meters) and the document itself,
   respectively.
   """
@@ -506,26 +514,26 @@ defmodule RethinkDB.Query do
   operate_on_two_args(:get_nearest, 168, opts: true)

   @doc """
-  Tests whether a geometry object is completely contained within another. When 
-  applied to a sequence of geometry objects, includes acts as a filter, returning 
+  Tests whether a geometry object is completely contained within another. When
+  applied to a sequence of geometry objects, includes acts as a filter, returning
   a sequence of objects from the sequence that include the argument.
   """
   @spec includes(Q.reql_geo, Q.reql_geo) :: Q.t
   operate_on_two_args(:includes, 164)

   @doc """
-  Tests whether two geometry objects intersect with one another. When applied to 
-  a sequence of geometry objects, intersects acts as a filter, returning a 
+  Tests whether two geometry objects intersect with one another. When applied to
+  a sequence of geometry objects, intersects acts as a filter, returning a
   sequence of objects from the sequence that intersect with the argument.
   """
   @spec intersects(Q.reql_geo, Q.reql_geo) :: Q.t
   operate_on_two_args(:intersects, 163)

   @doc """
-  Construct a geometry object of type Line. The line can be specified in one of 
+  Construct a geometry object of type Line. The line can be specified in one of
   two ways:

-  - Two or more two-item arrays, specifying latitude and longitude numbers of the 
+  - Two or more two-item arrays, specifying longitude and latitude numbers of the
   line’s vertices;
   - Two or more Point objects specifying the line’s vertices.
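The two equivalent ways of specifying a line can be sketched as follows (hypothetical coordinate pairs, longitude first, matching the `point` convention):

```elixir
# Sketch only: the same two-vertex line built both ways.
import RethinkDB.Query

line([[-122.423246, 37.779388], [-121.886420, 37.329898]])            # pairs
line([point(-122.423246, 37.779388), point(-121.886420, 37.329898)])  # Points
```

Either expression builds the same ReQL Line term; run it against a connection to materialize it.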
"""
@@ -533,8 +541,8 @@ defmodule RethinkDB.Query do
   operate_on_list(:line, 160)

   @doc """
-  Construct a geometry object of type Point. The point is specified by two 
-  floating point numbers, the longitude (−180 to 180) and latitude (−90 to 90) of 
+  Construct a geometry object of type Point. The point is specified by two
+  floating point numbers, the longitude (−180 to 180) and latitude (−90 to 90) of
   the point on a perfect sphere.
   """
   @spec point(Q.reql_geo) :: Q.t
@@ -542,27 +550,27 @@ defmodule RethinkDB.Query do
   operate_on_two_args(:point, 159)

   @doc """
-  Construct a geometry object of type Polygon. The Polygon can be specified in 
+  Construct a geometry object of type Polygon. The Polygon can be specified in
   one of two ways:

-  Three or more two-item arrays, specifying latitude and longitude numbers of the 
+  * Three or more two-item arrays, specifying longitude and latitude numbers of the
   polygon’s vertices;
   * Three or more Point objects specifying the polygon’s vertices.

   * Longitude (−180 to 180) and latitude (−90 to 90) of vertices are plotted on a
-  perfect sphere. See Geospatial support for more information on ReQL’s 
+  perfect sphere. See Geospatial support for more information on ReQL’s
   coordinate system.

-  If the last point does not specify the same coordinates as the first point, 
-  polygon will close the polygon by connecting them. You cannot directly 
-  construct a polygon with holes in it using polygon, but you can use polygon_sub 
+  If the last point does not specify the same coordinates as the first point,
+  polygon will close the polygon by connecting them. You cannot directly
+  construct a polygon with holes in it using polygon, but you can use polygon_sub
   to use a second polygon within the interior of the first to define a hole.
   """
   @spec polygon([Q.reql_geo]) :: Q.t
   operate_on_list(:polygon, 161)

   @doc """
-  Use polygon2 to “punch out” a hole in polygon1. 
polygon2 must be completely 
-  contained within polygon1 and must have no holes itself (it must not be the 
+  Use polygon2 to “punch out” a hole in polygon1. polygon2 must be completely
+  contained within polygon1 and must have no holes itself (it must not be the
   output of polygon_sub itself).
   """
   @spec polygon_sub(Q.reql_geo, Q.reql_geo) :: Q.t
@@ -628,7 +636,7 @@ defmodule RethinkDB.Query do
     ) |> run

   """
-  @spec eq_join(Q.reql_array, Q.reql_string, Q.reql_array, %{}) :: Q.t
+  @spec eq_join(Q.reql_array, Q.reql_string, Q.reql_array, Keyword.t) :: Q.t
   operate_on_three_args(:eq_join, 50, opts: true)

   @doc """
@@ -747,7 +755,7 @@ defmodule RethinkDB.Query do
   @spec mod(Q.reql_number, Q.reql_number) :: Q.t
   operate_on_two_args(:mod, 28)

-  @doc """ 
+  @doc """
   Compute the logical “and” of two values.

       iex> and_r(true, true) |> run conn
       %RethinkDB.Record{data: true}

   """
   @spec and_r(Q.reql_bool, Q.reql_bool) :: Q.t
   operate_on_two_args(:and_r, 67)
-  @doc """ 
+  @doc """
   Compute the logical “and” of all values in a list.

       iex> and_r([true, true, true]) |> run conn
@@ -817,7 +825,7 @@ defmodule RethinkDB.Query do
   """
   @spec eq([Q.reql_datum]) :: Q.t
   operate_on_list(:eq, 17)
-  
+
   @doc """
   Test if two values are not equal.

@@ -1015,17 +1023,17 @@ defmodule RethinkDB.Query do
   operate_on_single_arg(:db, 14)

   @doc """
-  Return all documents in a table. Other commands may be chained after table to 
-  return a subset of documents (such as get and filter) or perform further 
+  Return all documents in a table. Other commands may be chained after table to
+  return a subset of documents (such as get and filter) or perform further
   processing. There are two optional arguments.

-  * useOutdated: if true, this allows potentially out-of-date data to be returned, 
-  with potentially faster reads. It also allows you to perform reads from a 
+  * useOutdated: if true, this allows potentially out-of-date data to be returned,
+  with potentially faster reads. 
It also allows you to perform reads from a secondary replica if a primary has failed. Default false. - * identifierFormat: possible values are name and uuid, with a default of name. If - set to uuid, then system tables will refer to servers, databases and tables by + * identifierFormat: possible values are name and uuid, with a default of name. If + set to uuid, then system tables will refer to servers, databases and tables by UUID rather than name. (This only has an effect when used with system tables.) """ @spec table(Q.reql_string, Q.reql_opts) :: Q.t @@ -1049,11 +1057,11 @@ defmodule RethinkDB.Query do operate_on_two_args(:get_all, 78, opts: true) @doc """ - Get all documents between two keys. Accepts three optional arguments: index, - left_bound, and right_bound. If index is set to the name of a secondary index, - between will return all documents where that index’s value is in the specified - range (it uses the primary key by default). left_bound or right_bound may be - set to open or closed to indicate whether or not to include that endpoint of + Get all documents between two keys. Accepts three optional arguments: index, + left_bound, and right_bound. If index is set to the name of a secondary index, + between will return all documents where that index’s value is in the specified + range (it uses the primary key by default). left_bound or right_bound may be + set to open or closed to indicate whether or not to include that endpoint of the range (by default, left_bound is closed and right_bound is open). """ @spec between(Q.reql_array, Q.t, Q.t) :: Q.t @@ -1062,15 +1070,15 @@ defmodule RethinkDB.Query do @doc """ Get all the documents for which the given predicate is true. - filter can be called on a sequence, selection, or a field containing an array - of elements. The return type is the same as the type on which the function was + filter can be called on a sequence, selection, or a field containing an array + of elements. 
The return type is the same as the type on which the function was
   called.

-  The body of every filter is wrapped in an implicit .default(False), which means 
-  that if a non-existence errors is thrown (when you try to access a field that 
-  does not exist in a document), RethinkDB will just ignore the document. The 
-  default value can be changed by passing the named argument default. Setting 
-  this optional argument to r.error() will cause any non-existence errors to 
+  The body of every filter is wrapped in an implicit .default(False), which means
+  that if a non-existence error is thrown (when you try to access a field that
+  does not exist in a document), RethinkDB will just ignore the document. The
+  default value can be changed by passing the named argument default. Setting
+  this optional argument to r.error() will cause any non-existence errors to
   return a RqlRuntimeError.
   """
   @spec filter(Q.reql_array, Q.t) :: Q.t
@@ -1081,7 +1089,7 @@ defmodule RethinkDB.Query do
   #

   @doc """
-  Checks a string for matches. 
+  Checks a string for matches.

   Example:

@@ -1113,10 +1121,10 @@ defmodule RethinkDB.Query do

   @doc """
   Split a `string` with a given `separator` into `max_result` segments.
- 
+
       iex> "a-bra-ca-da-bra" |> split("-", 2) |> run conn
       %RethinkDB.Record{data: ["a", "bra", "ca-da-bra"]}
- 
+
   """
   @spec split(Q.reql_string, (Q.reql_string|nil), integer) :: Q.t
   operate_on_three_args(:split, 149)
@@ -1155,7 +1163,7 @@ defmodule RethinkDB.Query do

     * old_val: always nil.
     * new_val: the table’s new config value.

-  If a table with the same name already exists, the command throws 
+  If a table with the same name already exists, the command throws
   RqlRuntimeError.

   Note: Only alphanumeric characters and underscores are valid for the table name.
@@ -1163,21 +1171,21 @@ defmodule RethinkDB.Query do
   When creating a table you can specify the following options:

   * primary_key: the name of the primary key. The default primary key is id. 
- * durability: if set to soft, writes will be acknowledged by the server - immediately and flushed to disk in the background. The default is hard: + * durability: if set to soft, writes will be acknowledged by the server + immediately and flushed to disk in the background. The default is hard: acknowledgment of writes happens after data has been written to disk. * shards: the number of shards, an integer from 1-32. Defaults to 1. * replicas: either an integer or a mapping object. Defaults to 1. - If replicas is an integer, it specifies the number of replicas per shard. + If replicas is an integer, it specifies the number of replicas per shard. Specifying more replicas than there are servers will return an error. - If replicas is an object, it specifies key-value pairs of server tags and the - number of replicas to assign to those servers: {:tag1 => 2, :tag2 => 4, :tag3 + If replicas is an object, it specifies key-value pairs of server tags and the + number of replicas to assign to those servers: {:tag1 => 2, :tag2 => 4, :tag3 => 2, ...}. - * primary_replica_tag: the primary server specified by its server tag. Required - if replicas is an object; the tag must be in the object. This must not be + * primary_replica_tag: the primary server specified by its server tag. Required + if replicas is an object; the tag must be in the object. This must not be specified if replicas is an integer. - The data type of a primary key is usually a string (like a UUID) or a number, - but it can also be a time, binary object, boolean or an array. It cannot be an + The data type of a primary key is usually a string (like a UUID) or a number, + but it can also be a time, binary object, boolean or an array. It cannot be an object. """ @spec table_create(Q.t, Q.reql_string, Q.reql_opts) :: Q.t @@ -1208,9 +1216,9 @@ defmodule RethinkDB.Query do operate_on_single_arg(:table_list, 62) @doc """ - Create a new secondary index on a table. 
Secondary indexes improve the speed of
-  many read queries at the slight cost of increased storage space and decreased 
-  write performance. For more information about secondary indexes, read the 
+  Create a new secondary index on a table. Secondary indexes improve the speed of
+  many read queries at the slight cost of increased storage space and decreased
+  write performance. For more information about secondary indexes, read the
   article “Using secondary indexes in RethinkDB.”

   RethinkDB supports different types of secondary indexes:
@@ -1218,15 +1226,15 @@ defmodule RethinkDB.Query do
   * Simple indexes based on the value of a single field.
   * Compound indexes based on multiple fields.
   * Multi indexes based on arrays of values.
-  * Geospatial indexes based on indexes of geometry objects, created when the geo 
+  * Geospatial indexes based on indexes of geometry objects, created when the geo
   optional argument is true.
   * Indexes based on arbitrary expressions.

-  The index_function can be an anonymous function or a binary representation 
+  The index_function can be an anonymous function or a binary representation
   obtained from the function field of index_status.

-  If successful, create_index will return an object of the form {:created => 1}. 
-  If an index by that name already exists on the table, a RqlRuntimeError will be 
+  If successful, index_create will return an object of the form {:created => 1}.
+  If an index by that name already exists on the table, a RqlRuntimeError will be
   thrown.
   """
   @spec index_create(Q.t, Q.reql_string, Q.reql_func1, Q.reql_opts) :: Q.t
@@ -1246,23 +1254,23 @@ defmodule RethinkDB.Query do
   operate_on_single_arg(:index_list, 77)

   @doc """
-  Rename an existing secondary index on a table. 
If the optional argument
-  overwrite is specified as true, a previously existing index with the new name 
-  will be deleted and the index will be renamed. If overwrite is false (the 
+  Rename an existing secondary index on a table. If the optional argument
+  overwrite is specified as true, a previously existing index with the new name
+  will be deleted and the index will be renamed. If overwrite is false (the
   default) an error will be raised if the new index name already exists.

-  The return value on success will be an object of the format {:renamed => 1}, or 
+  The return value on success will be an object of the format {:renamed => 1}, or
   {:renamed => 0} if the old and new names are the same.

-  An error will be raised if the old index name does not exist, if the new index 
-  name is already in use and overwrite is false, or if either the old or new 
+  An error will be raised if the old index name does not exist, if the new index
+  name is already in use and overwrite is false, or if either the old or new
   index name is the same as the primary key field name.
   """
   @spec index_rename(Q.t, Q.reql_string, Q.reql_string, Q.reql_opts) :: Q.t
   operate_on_three_args(:index_rename, 156, opts: true)

   @doc """
-  Get the status of the specified indexes on this table, or the status of all 
+  Get the status of the specified indexes on this table, or the status of all
   indexes on this table if no indexes are specified.
   """
   @spec index_status(Q.t, Q.reql_string|Q.reql_array) :: Q.t
@@ -1271,7 +1279,7 @@ defmodule RethinkDB.Query do
   operate_on_two_args(:index_status, 139)

   @doc """
-  Wait for the specified indexes on this table to be ready, or for all indexes on 
+  Wait for the specified indexes on this table to be ready, or for all indexes on
   this table to be ready if no indexes are specified.
   """
   @spec index_wait(Q.t, Q.reql_string|Q.reql_array) :: Q.t
@@ -1284,116 +1292,118 @@ defmodule RethinkDB.Query do
   #

   @doc """
-  Insert documents into a table. Accepts a single document or an array of 
+  Insert documents into a table. Accepts a single document or an array of
   documents. The optional arguments are:

-  * durability: possible values are hard and soft. 
This option will override the
-  table or query’s durability setting (set in run). In soft durability mode 
-  Rethink_dB will acknowledge the write immediately after receiving and caching 
+  * durability: possible values are hard and soft. This option will override the
+  table or query’s durability setting (set in run). In soft durability mode
+  RethinkDB will acknowledge the write immediately after receiving and caching
   it, but before the write has been committed to disk.

-  * return_changes: if set to True, return a changes array consisting of 
+  * return_changes: if set to True, return a changes array consisting of
   old_val/new_val objects describing the changes made.

-  * conflict: Determine handling of inserting documents with the same primary key 
+  * conflict: Determine handling of inserting documents with the same primary key
   as existing entries. Possible values are "error", "replace" or "update".

-  * "error": Do not insert the new document and record the conflict as an error. 
+  * "error": Do not insert the new document and record the conflict as an error.
   This is the default.
   * "replace": Replace the old document in its entirety with the new one.
   * "update": Update fields of the old document with fields from the new one.
- 
+  * `lambda(id, old_doc, new_doc) :: resolved_doc`: a function that receives the
+    id, old and new documents as arguments and returns a document which will be
+    inserted in place of the conflicted one.

   Insert returns an object that contains the following attributes:

   * inserted: the number of documents successfully inserted.
-  * replaced: the number of documents updated when conflict is set to "replace" or 
+  * replaced: the number of documents updated when conflict is set to "replace" or
   "update".
-  * unchanged: the number of documents whose fields are identical to existing 
-  documents with the same primary key when conflict is set to "replace" or 
+  * unchanged: the number of documents whose fields are identical to existing
+  documents with the same primary key when conflict is set to "replace" or
   "update".

   * errors: the number of errors encountered while performing the insert.
   * first_error: If errors were encountered, contains the text of the first error.
   * deleted and skipped: 0 for an insert operation.
-  * generated_keys: a list of generated primary keys for inserted documents whose 
+  * generated_keys: a list of generated primary keys for inserted documents whose
   primary keys were not specified (capped to 100,000).
-  * warnings: if the field generated_keys is truncated, you will get the warning 
+  * warnings: if the field generated_keys is truncated, you will get the warning
   “Too many generated keys (), array truncated to 100000.”.
-  * changes: if return_changes is set to True, this will be an array of objects, 
-  one for each objected affected by the insert operation. Each object will have 
+  * changes: if return_changes is set to True, this will be an array of objects,
+  one for each object affected by the insert operation. Each object will have
   two keys: {"new_val": , "old_val": None}.
   """
-  @spec insert(Q.t, Q.reql_obj | Q.reql_array, %{}) :: Q.t
+  @spec insert(Q.t, Q.reql_obj | Q.reql_array, Keyword.t) :: Q.t
   operate_on_two_args(:insert, 56, opts: true)

   @doc """
-  Update JSON documents in a table. Accepts a JSON document, a ReQL expression, 
+  Update JSON documents in a table. Accepts a JSON document, a ReQL expression,
   or a combination of the two.

   The optional arguments are:

-  * durability: possible values are hard and soft. This option will override the 
-  table or query’s durability setting (set in run). 
In soft durability mode - RethinkDB will acknowledge the write immediately after receiving it, but before + * durability: possible values are hard and soft. This option will override the + table or query’s durability setting (set in run). In soft durability mode + RethinkDB will acknowledge the write immediately after receiving it, but before the write has been committed to disk. - * return_changes: if set to True, return a changes array consisting of + * return_changes: if set to True, return a changes array consisting of old_val/new_val objects describing the changes made. - * non_atomic: if set to True, executes the update and distributes the result to - replicas in a non-atomic fashion. This flag is required to perform - non-deterministic updates, such as those that require reading data from another + * non_atomic: if set to True, executes the update and distributes the result to + replicas in a non-atomic fashion. This flag is required to perform + non-deterministic updates, such as those that require reading data from another table. Update returns an object that contains the following attributes: * replaced: the number of documents that were updated. - * unchanged: the number of documents that would have been modified except the new + * unchanged: the number of documents that would have been modified except the new value was the same as the old value. - * skipped: the number of documents that were skipped because the document didn’t + * skipped: the number of documents that were skipped because the document didn’t exist. * errors: the number of errors encountered while performing the update. * first_error: If errors were encountered, contains the text of the first error. * deleted and inserted: 0 for an update operation. - * changes: if return_changes is set to True, this will be an array of objects, - one for each objected affected by the update operation. 
Each object will have
+  * changes: if return_changes is set to True, this will be an array of objects,
+  one for each object affected by the update operation. Each object will have
+  two keys: {"new_val": , "old_val": }.
   """
-  @spec update(Q.t, Q.reql_obj, %{}) :: Q.t
+  @spec update(Q.t, Q.reql_obj, Keyword.t) :: Q.t
   operate_on_two_args(:update, 53, opts: true)

   @doc """
-  Replace documents in a table. Accepts a JSON document or a ReQL expression, and 
-  replaces the original document with the new one. The new document must have the 
+  Replace documents in a table. Accepts a JSON document or a ReQL expression, and
+  replaces the original document with the new one. The new document must have the
   same primary key as the original document.

   The optional arguments are:

-  * durability: possible values are hard and soft. This option will override the 
+  * durability: possible values are hard and soft. This option will override the
   table or query’s durability setting (set in run).

-  In soft durability mode RethinkDB will acknowledge the write immediately after 
+  In soft durability mode RethinkDB will acknowledge the write immediately after
   receiving it, but before the write has been committed to disk.

-  * return_changes: if set to True, return a changes array consisting of 
+  * return_changes: if set to True, return a changes array consisting of
   old_val/new_val objects describing the changes made.

-  * non_atomic: if set to True, executes the replacement and distributes the result 
-  to replicas in a non-atomic fashion. This flag is required to perform 
-  non-deterministic updates, such as those that require reading data from another 
+  * non_atomic: if set to True, executes the replacement and distributes the result
+  to replicas in a non-atomic fashion. This flag is required to perform
+  non-deterministic updates, such as those that require reading data from another
   table.
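A sketch of a replace that requests the change set via the options above (the `users` table is hypothetical; assumes a started connection `conn`, `RethinkDB.run/2`, and `get/2` fetching by primary key):

```elixir
# Sketch only: replace one document wholesale and ask for old/new values.
# `users` and `conn` are hypothetical placeholders.
import RethinkDB.Query

table("users")
|> get(1)
|> replace(%{id: 1, name: "grace"}, return_changes: true)
|> RethinkDB.run(conn)
# On success, the result data includes a "replaced" count and, because of
# return_changes, a "changes" array of old_val/new_val pairs.
```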
 Replace returns an object that contains the following attributes:

   * replaced: the number of documents that were replaced
-  * unchanged: the number of documents that would have been modified, except that 
+  * unchanged: the number of documents that would have been modified, except that
   the new value was the same as the old value
-  * inserted: the number of new documents added. You can have new documents 
-  inserted if you do a point-replace on a key that isn’t in the table or you do a 
-  replace on a selection and one of the documents you are replacing has been 
+  * inserted: the number of new documents added. You can have new documents
+  inserted if you do a point-replace on a key that isn’t in the table or you do a
+  replace on a selection and one of the documents you are replacing has been
   deleted
   * deleted: the number of deleted documents when doing a replace with None
   * errors: the number of errors encountered while performing the replace.
   * first_error: If errors were encountered, contains the text of the first error.
   * skipped: 0 for a replace operation
-  * changes: if return_changes is set to True, this will be an array of objects, 
-  one for each objected affected by the replace operation. Each object will have 
+  * changes: if return_changes is set to True, this will be an array of objects,
+  one for each object affected by the replace operation. Each object will have
   two keys: {"new_val": , "old_val": }.
   """
-  @spec replace(Q.t, Q.reql_obj, %{}) :: Q.t
+  @spec replace(Q.t, Q.reql_obj, Keyword.t) :: Q.t
   operate_on_two_args(:replace, 55, opts: true)

   @doc """
@@ -1401,34 +1411,34 @@ defmodule RethinkDB.Query do

   The optional arguments are:

-  * durability: possible values are hard and soft. This option will override the 
+  * durability: possible values are hard and soft. This option will override the
   table or query’s durability setting (set in run).
- In soft durability mode RethinkDB will acknowledge the write immediately after + In soft durability mode RethinkDB will acknowledge the write immediately after receiving it, but before the write has been committed to disk. - * return_changes: if set to True, return a changes array consisting of + * return_changes: if set to True, return a changes array consisting of old_val/new_val objects describing the changes made. Delete returns an object that contains the following attributes: * deleted: the number of documents that were deleted. * skipped: the number of documents that were skipped. - For example, if you attempt to delete a batch of documents, and another - concurrent query deletes some of those documents first, they will be counted as + For example, if you attempt to delete a batch of documents, and another + concurrent query deletes some of those documents first, they will be counted as skipped. * errors: the number of errors encountered while performing the delete. * first_error: If errors were encountered, contains the text of the first error. inserted, replaced, and unchanged: all 0 for a delete operation. - * changes: if return_changes is set to True, this will be an array of objects, - one for each objected affected by the delete operation. Each object will have + * changes: if return_changes is set to True, this will be an array of objects, + one for each object affected by the delete operation. Each object will have * two keys: {"new_val": None, "old_val": }. """ @spec delete(Q.t) :: Q.t operate_on_single_arg(:delete, 54, opts: true) @doc """ - sync ensures that writes on a given table are written to permanent storage. - Queries that specify soft durability (durability='soft') do not give such - guarantees, so sync can be used to ensure the state of these queries. A call to + sync ensures that writes on a given table are written to permanent storage. 
+ Queries that specify soft durability (durability='soft') do not give such + guarantees, so sync can be used to ensure the state of these queries. A call to sync does not return until all previous writes to the table are persisted. If successful, the operation returns an object: {"synced": 1}. @@ -1442,8 +1452,8 @@ defmodule RethinkDB.Query do # @doc """ - Return a time object representing the current time in UTC. The command now() is - computed once when the server receives the query, so multiple instances of + Return a time object representing the current time in UTC. The command now() is + computed once when the server receives the query, so multiple instances of r.now() will always return the same time inside a query. """ @spec now() :: Q.t @@ -1459,7 +1469,7 @@ defmodule RethinkDB.Query do * day is an integer between 1 and 31. * hour is an integer. * minutes is an integer. - * seconds is a double. Its value will be rounded to three decimal places + * seconds is a double. Its value will be rounded to three decimal places (millisecond-precision). * timezone can be 'Z' (for UTC) or a string with the format ±[hh]:[mm]. """ @@ -1471,25 +1481,25 @@ defmodule RethinkDB.Query do end @doc """ - Create a time object based on seconds since epoch. The first argument is a + Create a time object based on seconds since epoch. The first argument is a double and will be rounded to three decimal places (millisecond-precision). """ @spec epoch_time(reql_number) :: Q.t - operate_on_single_arg(:epoch_time, 101) + operate_on_single_arg(:epoch_time, 101) @doc """ - Create a time object based on an ISO 8601 date-time string (e.g. - ‘2013-01-01T01:01:01+00:00’). We support all valid ISO 8601 formats except for - week dates. If you pass an ISO 8601 date-time without a time zone, you must + Create a time object based on an ISO 8601 date-time string (e.g. + ‘2013-01-01T01:01:01+00:00’). We support all valid ISO 8601 formats except for + week dates. 
If you pass an ISO 8601 date-time without a time zone, you must specify the time zone with the default_timezone argument. """ @spec iso8601(reql_string) :: Q.t operate_on_single_arg(:iso8601, 99, opts: true) @doc """ - Return a new time object with a different timezone. While the time stays the - same, the results returned by methods such as hours() will change since they - take the timezone into account. The timezone argument has to be of the ISO 8601 + Return a new time object with a different timezone. While the time stays the + same, the results returned by methods such as hours() will change since they + take the timezone into account. The timezone argument has to be of the ISO 8601 format. """ @spec in_timezone(Q.reql_time, Q.reql_string) :: Q.t @@ -1502,21 +1512,21 @@ defmodule RethinkDB.Query do operate_on_single_arg(:timezone, 127) @doc """ - Return if a time is between two other times (by default, inclusive for the + Return if a time is between two other times (by default, inclusive for the start, exclusive for the end). """ @spec during(Q.reql_time, Q.reql_time, Q.reql_time) :: Q.t operate_on_three_args(:during, 105, opts: true) @doc """ - Return a new time object only based on the day, month and year (ie. the same + Return a new time object only based on the day, month and year (ie. the same day at 00:00). """ @spec date(Q.reql_time) :: Q.t operate_on_single_arg(:date, 106) @doc """ - Return the number of seconds elapsed since the beginning of the day stored in + Return the number of seconds elapsed since the beginning of the day stored in the time object. """ @spec time_of_day(Q.reql_time) :: Q.t @@ -1541,14 +1551,14 @@ defmodule RethinkDB.Query do operate_on_single_arg(:day, 130) @doc """ - Return the day of week of a time object as a number between 1 and 7 (following + Return the day of week of a time object as a number between 1 and 7 (following ISO 8601 standard). 
""" @spec day_of_week(Q.reql_time) :: Q.t operate_on_single_arg(:day_of_week, 131) @doc """ - Return the day of the year of a time object as a number between 1 and 366 + Return the day of the year of a time object as a number between 1 and 366 (following ISO 8601 standard). """ @spec day_of_year(Q.reql_time) :: Q.t @@ -1589,20 +1599,20 @@ defmodule RethinkDB.Query do # @doc """ - Transform each element of one or more sequences by applying a mapping function - to them. If map is run with two or more sequences, it will iterate for as many + Transform each element of one or more sequences by applying a mapping function + to them. If map is run with two or more sequences, it will iterate for as many items as there are in the shortest sequence. - Note that map can only be applied to sequences, not single values. If you wish - to apply a function to a single value/selection (including an array), use the + Note that map can only be applied to sequences, not single values. If you wish + to apply a function to a single value/selection (including an array), use the do command. """ @spec map(Q.reql_array, Q.reql_func1) :: Q.t operate_on_two_args(:map, 38) @doc """ - Plucks one or more attributes from a sequence of objects, filtering out any - objects in the sequence that do not have the specified fields. Functionally, + Plucks one or more attributes from a sequence of objects, filtering out any + objects in the sequence that do not have the specified fields. Functionally, this is identical to has_fields followed by pluck on a sequence. """ @spec with_fields(Q.reql_array, Q.reql_array) :: Q.t @@ -1614,15 +1624,15 @@ defmodule RethinkDB.Query do @spec flat_map(Q.reql_array, Q.reql_func1) :: Q.t operate_on_two_args(:flat_map, 40) operate_on_two_args(:concat_map, 40) - + @doc """ - Sort the sequence by document values of the given key(s). 
To specify the - ordering, wrap the attribute with either r.asc or r.desc (defaults to + Sort the sequence by document values of the given key(s). To specify the + ordering, wrap the attribute with either r.asc or r.desc (defaults to ascending). - Sorting without an index requires the server to hold the sequence in memory, - and is limited to 100,000 documents (or the setting of the array_limit option - for run). Sorting with an index can be done on arbitrarily large tables, or + Sorting without an index requires the server to hold the sequence in memory, + and is limited to 100,000 documents (or the setting of the array_limit option + for run). Sorting with an index can be done on arbitrarily large tables, or after a between command using the same index. """ @spec order_by(Q.reql_array, Q.reql_datum) :: Q.t @@ -1648,14 +1658,14 @@ defmodule RethinkDB.Query do operate_on_three_args(:slice, 30, opts: true) @doc """ - Get the nth element of a sequence, counting from zero. If the argument is + Get the nth element of a sequence, counting from zero. If the argument is negative, count from the last element. """ @spec nth(Q.reql_array, Q.reql_number) :: Q.t operate_on_two_args(:nth, 45) @doc """ - Get the indexes of an element in a sequence. If the argument is a predicate, + Get the indexes of an element in a sequence. If the argument is a predicate, get the indexes of all elements matching it. """ @spec offsets_of(Q.reql_array, Q.reql_datum) :: Q.t @@ -1674,11 +1684,11 @@ defmodule RethinkDB.Query do operate_on_two_args(:union, 44) @doc """ - Select a given number of elements from a sequence with uniform random + Select a given number of elements from a sequence with uniform random distribution. Selection is done without replacement. 
- If the sequence has less than the requested number of elements (i.e., calling - sample(10) on a sequence with only five elements), sample will return the + If the sequence has fewer than the requested number of elements (e.g., calling + sample(10) on a sequence with only five elements), sample will return the entire sequence in a random order. """ @spec sample(Q.reql_array, Q.reql_number) :: Q.t @@ -1689,26 +1699,28 @@ # @doc """ - Plucks out one or more attributes from either an object or a sequence of + Plucks out one or more attributes from either an object or a sequence of objects (projection). """ @spec pluck(Q.reql_array, Q.reql_array|Q.reql_string) :: Q.t operate_on_two_args(:pluck, 33) @doc """ - The opposite of pluck; takes an object or a sequence of objects, and returns + The opposite of pluck; takes an object or a sequence of objects, and returns them with the specified paths removed. """ @spec without(Q.reql_array, Q.reql_array|Q.reql_string) :: Q.t operate_on_two_args(:without, 34) @doc """ - Merge two or more objects together to construct a new object with properties - from all. When there is a conflict between field names, preference is given to + Merge two or more objects together to construct a new object with properties + from all. When there is a conflict between field names, preference is given to fields in the rightmost object in the argument list. """ @spec merge(Q.reql_array, Q.reql_object|Q.reql_func1) :: Q.t operate_on_two_args(:merge, 35) + operate_on_list(:merge, 35) + operate_on_single_arg(:merge, 35) @doc """ Append a value to an array. @@ -1735,36 +1747,36 @@ defmodule RethinkDB.Query do operate_on_two_args(:set_insert, 88) @doc """ - Intersect two arrays returning values that occur in both of them as a set (an + Intersect two arrays returning values that occur in both of them as a set (an array with distinct values). 
""" @spec set_intersection(Q.reql_array, Q.reql_datum) :: Q.t operate_on_two_args(:set_intersection, 89) @doc """ - Add a several values to an array and return it as a set (an array with distinct + Add a several values to an array and return it as a set (an array with distinct values). """ @spec set_union(Q.reql_array, Q.reql_datum) :: Q.t operate_on_two_args(:set_union, 90) @doc """ - Remove the elements of one array from another and return them as a set (an + Remove the elements of one array from another and return them as a set (an array with distinct values). """ @spec set_difference(Q.reql_array, Q.reql_datum) :: Q.t operate_on_two_args(:set_difference, 91) @doc """ - Get a single field from an object. If called on a sequence, gets that field + Get a single field from an object. If called on a sequence, gets that field from every object in the sequence, skipping objects that lack it. """ @spec get_field(Q.reql_obj|Q.reql_array, Q.reql_string) :: Q.t operate_on_two_args(:get_field, 31) @doc """ - Test if an object has one or more fields. An object has a field if it has - that key and the key has a non-null value. For instance, the object {'a': + Test if an object has one or more fields. An object has a field if it has + that key and the key has a non-null value. For instance, the object {'a': 1,'b': 2,'c': null} has the fields a and b. """ @spec has_fields(Q.reql_array, Q.reql_array|Q.reql_string) :: Q.t @@ -1808,15 +1820,15 @@ defmodule RethinkDB.Query do operate_on_single_arg(:values, 186) @doc """ - Replace an object in a field instead of merging it with an existing object in a + Replace an object in a field instead of merging it with an existing object in a merge or update operation. """ @spec literal(Q.reql_object) :: Q.t operate_on_single_arg(:literal, 137) @doc """ - Creates an object from a list of key-value pairs, where the keys must be - strings. 
r.object(A, B, C, D) is equivalent to r.expr([[A, B], [C, + Creates an object from a list of key-value pairs, where the keys must be + strings. r.object(A, B, C, D) is equivalent to r.expr([[A, B], [C, D]]).coerce_to('OBJECT'). """ @spec object(Q.reql_array) :: Q.t @@ -1873,4 +1885,3 @@ defmodule RethinkDB.Query do operate_on_zero_args(:minval, 180) operate_on_zero_args(:maxval, 181) end - diff --git a/lib/rethinkdb/query/macros.ex b/lib/rethinkdb/query/macros.ex index 21a9d81..0596832 100644 --- a/lib/rethinkdb/query/macros.ex +++ b/lib/rethinkdb/query/macros.ex @@ -106,6 +106,22 @@ defmodule RethinkDB.Query.Macros do m = Map.from_struct(t) |> Map.put_new("$reql_type$", "TIME") wrap(m) end + def wrap(t = %DateTime{utc_offset: utc_offset, std_offset: std_offset}) do + offset = utc_offset + std_offset + offset_negative = offset < 0 + offset_hour = div(abs(offset), 3600) + # Minutes past the hour, not the raw remainder in seconds. + offset_minute = div(rem(abs(offset), 3600), 60) + time_zone = + if offset_negative do "-" else "+" end <> + String.pad_leading(Integer.to_string(offset_hour), 2, "0") <> + ":" <> + String.pad_leading(Integer.to_string(offset_minute), 2, "0") + wrap(%{ + "$reql_type$" => "TIME", + "epoch_time" => DateTime.to_unix(t, :milliseconds) / 1000, + "timezone" => time_zone + }) + end def wrap(map) when is_map(map) do Enum.map(map, fn {k,v} -> {k, wrap(v)} @@ -115,6 +131,6 @@ defmodule RethinkDB.Query.Macros do def wrap(t) when is_tuple(t), do: wrap(Tuple.to_list(t)) def wrap(data), do: data - def make_opts(opts) when is_map(opts), do: opts + def make_opts(opts) when is_map(opts), do: wrap(opts) def make_opts(opts) when is_list(opts), do: Enum.into(opts, %{}) end diff --git a/lib/rethinkdb/query/term_info.json b/lib/rethinkdb/query/term_info.json new file mode 100644 index 0000000..4884c83 --- /dev/null +++ b/lib/rethinkdb/query/term_info.json @@ -0,0 +1 @@ 
+{"has_fields":32,"db_list":59,"funcall":64,"random":151,"map":38,"to_geojson":158,"index_create":75,"ungroup":150,"get_intersecting":166,"get_all":78,"saturday":112,"error":12,"or":66,"reconfigure":176,"gt":21,"day":130,"september":122,"desc":74,"min":147,"nth":45,"splice_at":85,"make_array":2,"info":79,"mod":28,"set_insert":88,"thursday":110,"zip":72,"div":27,"db_drop":58,"skip":70,"insert":56,"may":118,"wait":177,"object":143,"range":173,"http":153,"july":120,"difference":95,"february":115,"outer_join":49,"without":34,"sub":25,"table_drop":61,"geojson":157,"fold":187,"minutes":134,"includes":164,"fill":167,"tuesday":108,"default":92,"date":106,"fn":69,"during":105,"august":121,"index_rename":156,"changes":152,"ceil":184,"binary":155,"limit":71,"offsets_of":87,"reduce":37,"time":136,"var":10,"get_nearest":168,"upcase":141,"type_of":52,"hours":133,"and":67,"polygon":161,"not":23,"asc":73,"match":97,"json":98,"distance":162,"inner_join":48,"filter":39,"minval":180,"set_difference":91,"slice":30,"status":175,"june":119,"round":185,"intersects":163,"sync":138,"monday":107,"make_obj":3,"prepend":80,"pluck":33,"table":15,"between":182,"merge":35,"order_by":41,"is_empty":86,"eq_join":50,"max":148,"day_of_year":132,"floor":183,"avg":146,"march":116,"delete_at":83,"rebalance":179,"seconds":135,"coerce_to":51,"delete":54,"update":53,"index_wait":140,"literal":137,"branch":65,"javascript":11,"sample":81,"epoch_time":101,"january":114,"sunday":113,"get":16,"le":20,"december":125,"db":14,"get_field":31,"april":117,"contains":93,"lt":19,"between_deprecated":36,"datum":1,"index_status":139,"set_intersection":89,"change_at":84,"mul":26,"replace":55,"october":123,"grant":188,"add":24,"table_create":60,"to_iso8601":100,"in_timezone":104,"day_of_week":131,"timezone":127,"maxval":181,"uuid":169,"concat_map":40,"split":149,"eq":17,"year":128,"downcase":142,"friday":111,"db_create":57,"count":43,"union":44,"bracket":170,"to_json_string":172,"insert_at":82,"month":129,"set_union":90,"ge
":22,"point":159,"sum":145,"wednesday":109,"line":160,"now":103,"group":144,"november":124,"to_epoch_time":102,"append":29,"table_list":62,"time_of_day":126,"distinct":42,"index_drop":76,"index_list":77,"implicit_var":13,"config":174,"args":154,"values":186,"ne":18,"keys":94,"for_each":68,"circle":165,"iso8601":99,"with_fields":96,"polygon_sub":171} \ No newline at end of file diff --git a/lib/rethinkdb/response.ex b/lib/rethinkdb/response.ex index 691271d..91d05a3 100644 --- a/lib/rethinkdb/response.ex +++ b/lib/rethinkdb/response.ex @@ -19,13 +19,13 @@ end defmodule RethinkDB.Feed do @moduledoc false - defstruct token: nil, data: nil, pid: nil, note: nil, profile: nil + defstruct token: nil, data: nil, pid: nil, note: nil, profile: nil, opts: nil defimpl Enumerable, for: __MODULE__ do def reduce(changes, acc, fun) do stream = Stream.unfold(changes, fn x = %RethinkDB.Feed{data: []} -> - r = RethinkDB.next(x) + {:ok, r} = RethinkDB.next(x) {r, struct(r, data: [])} x = %RethinkDB.Feed{} -> {x, struct(x, data: [])} @@ -45,22 +45,18 @@ defmodule RethinkDB.Response do @moduledoc false defstruct token: nil, data: "", profile: nil - def parse(raw_data, token, pid) do + def parse(raw_data, token, pid, opts) do d = Poison.decode!(raw_data) - data = RethinkDB.Pseudotypes.convert_reql_pseudotypes(d["r"]) - resp = case d["t"] do - 1 -> %RethinkDB.Record{data: hd(data)} - 2 -> %RethinkDB.Collection{data: data} - 3 -> case d["n"] do - [2] -> %RethinkDB.Feed{token: token, data: hd(data), pid: pid, note: d["n"]} - _ -> %RethinkDB.Feed{token: token, data: data, pid: pid, note: d["n"]} - end - 4 -> %RethinkDB.Response{token: token, data: d} - 16 -> %RethinkDB.Response{token: token, data: d} - 17 -> %RethinkDB.Response{token: token, data: d} - 18 -> %RethinkDB.Response{token: token, data: d} + data = RethinkDB.Pseudotypes.convert_reql_pseudotypes(d["r"], opts) + {code, resp} = case d["t"] do + 1 -> {:ok, %RethinkDB.Record{data: hd(data)}} + 2 -> {:ok, %RethinkDB.Collection{data: 
data}} + 3 -> {:ok, %RethinkDB.Feed{token: token, data: data, pid: pid, note: d["n"], opts: opts}} + 4 -> {:ok, %RethinkDB.Response{token: token, data: d}} + 16 -> {:error, %RethinkDB.Response{token: token, data: d}} + 17 -> {:error, %RethinkDB.Response{token: token, data: d}} + 18 -> {:error, %RethinkDB.Response{token: token, data: d}} end - %{resp | :profile => d["p"]} + {code, %{resp | :profile => d["p"]}} end end - diff --git a/mix.exs b/mix.exs index b785148..9f9d676 100644 --- a/mix.exs +++ b/mix.exs @@ -2,7 +2,7 @@ defmodule RethinkDB.Mixfile do use Mix.Project def project do [app: :rethinkdb, - version: "0.3.2", + version: "0.4.0", elixir: "~> 1.0", description: "RethinkDB driver for Elixir", package: package, @@ -40,7 +40,7 @@ defmodule RethinkDB.Mixfile do use Mix.Project # Type `mix help deps` for more examples and options defp deps do [ - {:poison, "~> 1.5 or ~> 2.0"}, + {:poison, "~> 3.0"}, {:earmark, "~> 0.1", only: :dev}, {:ex_doc, "~> 0.7", only: :dev}, {:flaky_connection, github: "hamiltop/flaky_connection", only: :test}, diff --git a/mix.lock b/mix.lock index 27405d2..02cdc0b 100644 --- a/mix.lock +++ b/mix.lock @@ -1,16 +1,16 @@ -%{"certifi": {:hex, :certifi, "0.3.0"}, - "connection": {:hex, :connection, "1.0.1"}, - "dialyze": {:hex, :dialyze, "0.2.0"}, - "earmark": {:hex, :earmark, "0.1.19"}, - "ex_doc": {:hex, :ex_doc, "0.10.0"}, - "excoveralls": {:hex, :excoveralls, "0.3.11"}, - "exjsx": {:hex, :exjsx, "3.2.0"}, +%{"certifi": {:hex, :certifi, "0.3.0", "389d4b126a47895fe96d65fcf8681f4d09eca1153dc2243ed6babad0aac1e763", [:rebar3], []}, + "connection": {:hex, :connection, "1.0.1", "16bf178158088f29513a34a742d4311cd39f2c52425559d679ecb28a568c5c0b", [:mix], []}, + "dialyze": {:hex, :dialyze, "0.2.0", "ecabf292e9f4bd0f7d844981f899a85c0300b30ff2dd1cdfef0c81a6496466f1", [:mix], []}, + "earmark": {:hex, :earmark, "0.1.19", "ffec54f520a11b711532c23d8a52b75a74c09697062d10613fa2dbdf8a9db36e", [:mix], []}, + "ex_doc": {:hex, :ex_doc, "0.10.0", 
"f49c237250b829df986486b38f043e6f8e19d19b41101987f7214543f75947ec", [:mix], [{:earmark, "~> 0.1.17 or ~> 0.2", [hex: :earmark, optional: true]}]}, + "excoveralls": {:hex, :excoveralls, "0.3.11", "cd1abaf07db5bed9cf7891d86470247c8b3c8739d7758679071ce1920bb09dbc", [:mix], [{:exjsx, "~> 3.0", [hex: :exjsx, optional: false]}, {:hackney, ">= 0.12.0", [hex: :hackney, optional: false]}]}, + "exjsx": {:hex, :exjsx, "3.2.0", "7136cc739ace295fc74c378f33699e5145bead4fdc1b4799822d0287489136fb", [:mix], [{:jsx, "~> 2.6.2", [hex: :jsx, optional: false]}]}, "flaky_connection": {:git, "https://github.com/hamiltop/flaky_connection.git", "e3a09e7198e1b155f35291ffad438966648a8156", []}, - "hackney": {:hex, :hackney, "1.4.8"}, - "idna": {:hex, :idna, "1.0.3"}, + "hackney": {:hex, :hackney, "1.4.8", "c8c6977ed55cc5095e3929f6d94a6f732dd2e31ae42a7b9236d5574ec3f5be13", [:rebar3], [{:certifi, "0.3.0", [hex: :certifi, optional: false]}, {:idna, "1.0.3", [hex: :idna, optional: false]}, {:mimerl, "1.0.2", [hex: :mimerl, optional: false]}, {:ssl_verify_hostname, "1.0.5", [hex: :ssl_verify_hostname, optional: false]}]}, + "idna": {:hex, :idna, "1.0.3", "d456a8761cad91c97e9788c27002eb3b773adaf5c893275fc35ba4e3434bbd9b", [:rebar3], []}, "inch_ex": {:hex, :inch_ex, "0.2.4"}, - "jsx": {:hex, :jsx, "2.6.2"}, - "mimerl": {:hex, :mimerl, "1.0.2"}, - "poison": {:hex, :poison, "2.0.1"}, - "ranch": {:hex, :ranch, "1.1.0"}, - "ssl_verify_hostname": {:hex, :ssl_verify_hostname, "1.0.5"}} + "jsx": {:hex, :jsx, "2.6.2", "213721e058da0587a4bce3cc8a00ff6684ced229c8f9223245c6ff2c88fbaa5a", [:mix, :rebar], []}, + "mimerl": {:hex, :mimerl, "1.0.2", "993f9b0e084083405ed8252b99460c4f0563e41729ab42d9074fd5e52439be88", [:rebar3], []}, + "poison": {:hex, :poison, "3.1.0", "d9eb636610e096f86f25d9a46f35a9facac35609a7591b3be3326e99a0484665", [:mix], [], "hexpm"}, + "ranch": {:hex, :ranch, "1.1.0", "f7ed6d97db8c2a27cca85cacbd543558001fc5a355e93a7bff1e9a9065a8545b", [:make], []}, + "ssl_verify_hostname": {:hex, 
:ssl_verify_hostname, "1.0.5", "2e73e068cd6393526f9fa6d399353d7c9477d6886ba005f323b592d389fb47be", [:make], []}} diff --git a/test/cert/host.crt b/test/cert/host.crt index 7c388dd..b2d7e54 100644 --- a/test/cert/host.crt +++ b/test/cert/host.crt @@ -1,20 +1,21 @@ -----BEGIN CERTIFICATE----- -MIIDNDCCAhwCCQDILkh9RqCy+jANBgkqhkiG9w0BAQUFADBYMQswCQYDVQQGEwJB -VTETMBEGA1UECBMKU29tZS1TdGF0ZTEhMB8GA1UEChMYSW50ZXJuZXQgV2lkZ2l0 -cyBQdHkgTHRkMREwDwYDVQQDEwhMb2NhbCBDQTAeFw0xNjAxMjMyMTQxNTNaFw0x -NzAxMjIyMTQxNTNaMGAxCzAJBgNVBAYTAkFVMRMwEQYDVQQIEwpTb21lLVN0YXRl -MSEwHwYDVQQKExhJbnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQxGTAXBgNVBAMTEGZs -YWt5IGNvbm5lY3Rpb24wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDi -0MdgEkF1PYuyVMeH1gAOAJ4zuvtrx/CTcxa6Z6etepWmc6/BA7vKGiIHnKVf07WY -jnjhLm3Xo6wzO8QlHHNCYAWxkAUtbDKasyqTBln8oTJdaDrHqHjddSUUJubWe0gx -/BiX9agOH6zXRMJMDYlQEPHOR/FY69yVEjAeGVpUyc2yowglK/4dHLPnVkTpqkw6 -HEJ/OFVJ4p06WeeS4GDSQq8TZVUSVE4YCNcpZSP2MMDBDnkEPLVmVbwzoOKGlxtz -lgQz8dZC0fuVe6pk4vOQCHF8KQ3tIwgD9Ua1OZhobDZRgmYZq29NoJW4eus95TE4 -uYmU6RYu+uMQeTFuZHVHAgMBAAEwDQYJKoZIhvcNAQEFBQADggEBALIyuD/z5XID -nLuFEBy1kOIwgndEYR8Y0VFw46uDN1iAcsfjI0nDhyC6iq/UQX13/xNah/DboDGR -KzF3ZtRs3S6y16MrtaASGbQZ2ymvZB+2gDKExue9XV51W0EjO1RFJXlSwqwFVJTc -7c0jQpW3+Ger7I4XVzkk4Bom39slZmHbGzPtXM0n5KYPvirWolD/P6tXdgN6B00g -NhQqJVbXuLbkDaEE99NAgOTv9aLRXdj5M2Ye2N1BzUMKs2zPHUIqXsn0Zy1kFSmh -Lr1ee1VZZiBexw40ZxZNiCDKbBgJCyctXolmr72xr8lOpK0XdFVpR8Z7+1VAeSAc -Z3O7+yFOWfk= +MIIDeDCCAmACCQCXLb1LngVNuTANBgkqhkiG9w0BAQsFADCBgDELMAkGA1UEBhMC +VVMxCzAJBgNVBAgTAk5ZMREwDwYDVQQHEwhOZXcgWW9yazEZMBcGA1UEChMQUmV0 +aGlua0RCIEVsaXhpcjEYMBYGA1UEAxMPZm9vLmV4YW1wbGUuY29tMRwwGgYJKoZI +hvcNAQkBFg1mb29AZ21haWwuY29tMB4XDTE3MDIxOTIzMzMxNVoXDTE4MDcwNDIz +MzMxNVowezELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAk5ZMQswCQYDVQQHEwJOWTEh +MB8GA1UEChMYSW50ZXJuZXQgV2lkZ2l0cyBQdHkgTHRkMREwDwYDVQQDEwgxMC4w +LjAuMTEcMBoGCSqGSIb3DQEJARYNZm9vQGdtYWlsLmNvbTCCASIwDQYJKoZIhvcN +AQEBBQADggEPADCCAQoCggEBAKGKWJJ3NDXapqJbJTwz9TGkBb4Re1o793BkFbzk 
+XJpohRcbEMQWK3fzzjFU2NzDV7uDFR5Em6GBP9piGa8SGM4WgFUu6alRXSYRJ/BB +QRX5Qm+MTMDhYIRjZAQiOVCVLHXl/eWMxrItffyUVLe8NQWDIOz8UoUWMrTlpWsi +kSNUVjWOhYZGRHQcyriRxua35S7mCtk6DW0RKU0nG9cB7Nyc9YYKxpHT63Ki+/FH +gmqAF1cJ0OqtN27FMY2aHgT3HvRbbtLGHCnkZa4HErmrj7rlXoacf0bVJYYYB7AG +thfcK7nwUCmUdWNwS829WdV42Tvv12Ww6eZNSkiFgD9KEssCAwEAATANBgkqhkiG +9w0BAQsFAAOCAQEAhDzL0UsKN6Yxbce3QpWAaHTyFZU+NPckPm66GyEmwBMuvLVr +d2oMzqOXZ3AW+rydh4i0GYZQYK2KXUgTxYfIz3fvylU0g4rlHI/Ej6gnFJ5g2k8v +h2FLY6mp1SULVopxqURWQPIPm+ztz/wQYPmB1W9W8aQYdEBgoIAmKvxRnBmU7SuP +L2sQmoPnh9pCCdS3djXLoj9pCUe7YDJCnxqOe8zpH3FOIykdCfsIphpPs4Mkw+LY +N1+KHBoRwkj0JBqwaNLF3sjkXgi0v06l4DZ7WAy3Q2k3QD8tuiSFEM0g4/2y58Ts +iFSH2inRL4NJIew2kx+IBHEQDxffgA62zhjxVw== -----END CERTIFICATE----- diff --git a/test/cert/host.key b/test/cert/host.key index 0fff5d0..28a6825 100644 --- a/test/cert/host.key +++ b/test/cert/host.key @@ -1,27 +1,27 @@ -----BEGIN RSA PRIVATE KEY----- -MIIEowIBAAKCAQEA4tDHYBJBdT2LslTHh9YADgCeM7r7a8fwk3MWumenrXqVpnOv -wQO7yhoiB5ylX9O1mI544S5t16OsMzvEJRxzQmAFsZAFLWwymrMqkwZZ/KEyXWg6 -x6h43XUlFCbm1ntIMfwYl/WoDh+s10TCTA2JUBDxzkfxWOvclRIwHhlaVMnNsqMI -JSv+HRyz51ZE6apMOhxCfzhVSeKdOlnnkuBg0kKvE2VVElROGAjXKWUj9jDAwQ55 -BDy1ZlW8M6Dihpcbc5YEM/HWQtH7lXuqZOLzkAhxfCkN7SMIA/VGtTmYaGw2UYJm -GatvTaCVuHrrPeUxOLmJlOkWLvrjEHkxbmR1RwIDAQABAoIBAFCqh/33ACi+Nsy5 -sizxQxu3xIwJWBnBBiKqr86jxtK/4jFMu5kdxs/d83RZlcc6+D7FjOApLw+eOkQO -YXgBYkyc8elwmybIcEbsqZuYirB6c/sccqtHk5TPcNx16WsmdUqxqd2BlL9RLJty -7Nc3iTpcjGMc2w0Q9WZfDZXm1mWIpHfHYktUsDhSrm/zpTEUHgxsVSu36oXDllHF -NYVNseHoISZBKMG95YnDEQJOaSOgfx2DNDUOk8LZ9ucrb6/BeQh/a4OyJIAnsL0W -bY+VJw5Qt9dODffrJ3BXDxhCjCtahgrtX08SCYq4T/X9mvub0RDLLfoO9NyNtkyT -9Qx0xUECgYEA/CltD0FNOuSZms4gBLgWTLwT2JFbjAY64705cKd3hDGGgnBCWVra -i5zULeTHm/Z7mjQvUBHv1+imMw/RL9QygCr1yaGbaWoWcuBGX8y4B2slwHercbrg -2c9MRnImsySsblL9e3+jkRpi3U/UEBuMbSvCWG1i/9ufnwFz6h4Zc88CgYEA5kSW -pZtCaJMz5lOTmMFm2BP+0ZyHnl3neJWPGL4fdUHF2Z7wCAe2e8vRHyBVLdM6olUj -gF8BPy7k5UF8D7Z3UESI/OYZtQEEdCNyPetkRT80QIw5XhjDyoa+iNvEsHrP47NE 
-h4bnqHYWXtjia0QwS7XXywofQD5T/EpOUmycLQkCgYEAuW+xfw1zwQKJn1lEHJRP -+eA57AKBQ0j1l7L5Acp1zuYo19W8RT/WBeOv+YwL6rrpjK4huQ1nxuyVBGn2WOkA -tlZhaAULaAsXNSWPOzYug24dVhvrHXjjj+mtWwTpRsaKc5teQ6rK25N+7uecuLe5 -njMW+bZ/nk6hZOpJlvrJlusCgYA8jXjYH9X8zgjt3riHiQRUeh2eXX1EZglCqoGw -zf5TxXIT4jnYwr54G4bomoYLwOpAWgc18MXRKbHDn87SCvehQgSMDK5h7NyQ9elK -4yXBF/fTqYxEdFq4XWqpbrFwfzs/85pn0VAF+tezJXGVJ59TqYQPvp+tMza+t4OV -JT6EkQKBgBWVXS4aRIMl/qtJxP4NgyT4tEFxkfYsuAqACebR3CFxsDr71JWKwwX1 -1cuGJ7R9ZqwYmx2YjRStbZTqjip66JUbkxwpZehTWkfaokepbKnPb70vJOuyS1lo -RL9z6QnT2zALGcuCRW3aiD7VPVSZyZCIcEZanosleSWwmT7Fei3/ +MIIEogIBAAKCAQEAoYpYknc0NdqmolslPDP1MaQFvhF7Wjv3cGQVvORcmmiFFxsQ +xBYrd/POMVTY3MNXu4MVHkSboYE/2mIZrxIYzhaAVS7pqVFdJhEn8EFBFflCb4xM +wOFghGNkBCI5UJUsdeX95YzGsi19/JRUt7w1BYMg7PxShRYytOWlayKRI1RWNY6F +hkZEdBzKuJHG5rflLuYK2ToNbREpTScb1wHs3Jz1hgrGkdPrcqL78UeCaoAXVwnQ +6q03bsUxjZoeBPce9Ftu0sYcKeRlrgcSuauPuuVehpx/RtUlhhgHsAa2F9wrufBQ +KZR1Y3BLzb1Z1XjZO+/XZbDp5k1KSIWAP0oSywIDAQABAoIBAGqOdYp3syrrBgwG +j3M82rpZ9afApFuLPtcWTfiBskvwMgphwhd2gEnputNzonFNMavw9Zc3rmlEdrg5 +CbQf/djDoveNsHgNwaIAoxWqFaLG/vnR1DdO83mgjjLj2Ga9X8yNX4Nx7wdNVtOr +jI5+SYNPUgLBFjXPxLbq3MjkzlQ8myki26qvXB9GjgF3gWoCz1q6dvjhCDtithNd +NoHQUI5YmjuJas3gZ7lUu21MuOAWi0oPFuMMznjPgaaZ3vJVMxmLc/YsnryqbAoA +ZQhn8ZNeXxUDI2mxiuk/bYgQAqRxHqogvL4u2tmm/L9FSSHVJgB3IjoDMiTYQjbk +hZg+A3ECgYEAzldbDxYv3aj02sDidMFQvCVIYJ/v1iJDBFVNN148Y+5b64xvGbVM +7XkHmYQctg/jIASGiYjGsyVpRT+fRvRleHjNrO4FyWE317nUjER1EZblpZiqeXW0 +c+Lf4rKBN3z5eNJRbO/GWAmSvjs33Xdt68YNeRB4v5NA/b01tXp3NlcCgYEAyGrR +/ybLBh9D86mgW28zBD39TdLdvtcNSTWPhu/Ceh9KqhaUZrc/jb8StFxG/Cx5S/oc +F3BHypGkD1NZDAc5NJoMPQixd1BZB3F9EBmJk219KttimAiM27CKmF+7RbSfhu8P +3uK1oLSo3sVOhGQFpXQZ2g51B1+ltAboTLBcNq0CgYB/BzRd01DgaxViXoCLVD95 +tJIcOhoSf8E2N7VzsqYG90TLfAchkoWrZGkTT0vFoX43xdF1diitPQjTwtkxe1/E +jMpB/b6+PQV93z9EoxhXHch+6793Ssku1qryCuaV3HBQu1m5cNtwc2RNjHNV+iJH +lgPRVhygA+1syED6WkxtvQKBgEjZe0exxC6PgtW5HM7flr2+AqsdMPlDllK8I1W7 +JQfbA/rbhknn5jQR9iyVNkBHsjeJzFhAuffKBMaFV2Ll5UdXj4dH96oVDKeF+x21 
+CqsKK2s+n5H/2aOpgldsxNfLlgkoMK6l3btyr8d6FNZOvTatAxCeHK/3dnX/5MSr +fnlpAoGADad3tXN+vJARd/vXHfsHZp8/nF+hDrteEC8jwv//l+10TErWDZvxxf+8 +vgK6zWZogitNjmj49A8/PzNJY45JiCodo1z2D22jP4fqT8okzax7+i+v6noIFVSA +ikQbv1yjcf3CeVsihHs3bvcxAW+fF0yOQ1uNIL97ThS2gIJwAwY= -----END RSA PRIVATE KEY----- diff --git a/test/cert/rootCA.crt b/test/cert/rootCA.crt deleted file mode 100644 index 38c4778..0000000 --- a/test/cert/rootCA.crt +++ /dev/null @@ -1,24 +0,0 @@ ------BEGIN CERTIFICATE----- -MIID8DCCAtigAwIBAgIJAKFx3RUBfNaFMA0GCSqGSIb3DQEBBQUAMFgxCzAJBgNV -BAYTAkFVMRMwEQYDVQQIEwpTb21lLVN0YXRlMSEwHwYDVQQKExhJbnRlcm5ldCBX -aWRnaXRzIFB0eSBMdGQxETAPBgNVBAMTCExvY2FsIENBMB4XDTE2MDEyMzIxNDE0 -NloXDTE3MDEyMjIxNDE0NlowWDELMAkGA1UEBhMCQVUxEzARBgNVBAgTClNvbWUt -U3RhdGUxITAfBgNVBAoTGEludGVybmV0IFdpZGdpdHMgUHR5IEx0ZDERMA8GA1UE -AxMITG9jYWwgQ0EwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC/UUzH -ENfrO249kNYurLYdHzgfW9SOP7YqGpX+TLDIQB1uxrArZyXv4/LnH2/OznZziPWJ -TWj5E+Z+jYQapD7kE8SehsJm7Qb0xwbtpdmQNta4BlBYD6SBNMEd00SoPhQ17FPI -p5GLHsGoaPvVxfGqiDi3PJighypdfSZy8cvWAEatE+7RxZ6hf8jYKxmu0w53XVKl -8hgyYnc/3m8r+xEdEI3PTCHSEOHuwbyLxmNXFkAKDUzcg9L5tqvmf9kv6Z0J3huJ -JRJM6alkb6Epm1OdxsAPS2i1RrP+JzSfLld4jyExz4u+n/1XOBy+fFzcpy8hfleS -l6DJKdLcd/dSqNhBAgMBAAGjgbwwgbkwHQYDVR0OBBYEFPEhryKW3x/AByb/6VqN -rliGjnRNMIGJBgNVHSMEgYEwf4AU8SGvIpbfH8AHJv/pWo2uWIaOdE2hXKRaMFgx -CzAJBgNVBAYTAkFVMRMwEQYDVQQIEwpTb21lLVN0YXRlMSEwHwYDVQQKExhJbnRl -cm5ldCBXaWRnaXRzIFB0eSBMdGQxETAPBgNVBAMTCExvY2FsIENBggkAoXHdFQF8 -1oUwDAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQUFAAOCAQEAPzdoq/jNMIIt6ZeD -6BFOWV6+p7clHtsc3zrRjY0MxJs9AWjJsDyp4AZnXPJ75FwYYznNEAXsJgK0Gd77 -SciSCxToEQdGZ4RKfZM/O4Mi8vDZ1bIPk2aXfebcn5Q+NCu9MN/nGGbxs9Q+vpeP -6tbDhUqK0vXzPccIY2TgzPM/cwIkc9BSFNtk3oahBzSAAtX7RFlS2BM51DeUvvfN -EIfhHQEQN+PBL1s9qsJm0j3gkrZhybUSgSYh6Pa+fSFzP7VyzL7dnUsgdnvdM/H/ -HFub6vDdHkUB3PcCW60SarfxKxBOfXr5no7HgnWq919mGYwiWopB91ZCgN+3MO7M -H8xvqw== ------END CERTIFICATE----- diff --git a/test/cert/rootCA.pem b/test/cert/rootCA.pem new file mode 100644 index 0000000..a53f0f0 
--- /dev/null +++ b/test/cert/rootCA.pem @@ -0,0 +1,26 @@ +-----BEGIN CERTIFICATE----- +MIIEbjCCA1agAwIBAgIJAOQ/G0GKEPPHMA0GCSqGSIb3DQEBCwUAMIGAMQswCQYD +VQQGEwJVUzELMAkGA1UECBMCTlkxETAPBgNVBAcTCE5ldyBZb3JrMRkwFwYDVQQK +ExBSZXRoaW5rREIgRWxpeGlyMRgwFgYDVQQDEw9mb28uZXhhbXBsZS5jb20xHDAa +BgkqhkiG9w0BCQEWDWZvb0BnbWFpbC5jb20wHhcNMTcwMjE5MjMzMDQzWhcNMTkx +MjEwMjMzMDQzWjCBgDELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAk5ZMREwDwYDVQQH +EwhOZXcgWW9yazEZMBcGA1UEChMQUmV0aGlua0RCIEVsaXhpcjEYMBYGA1UEAxMP +Zm9vLmV4YW1wbGUuY29tMRwwGgYJKoZIhvcNAQkBFg1mb29AZ21haWwuY29tMIIB +IjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAv6zwu/jO6Y0WNO1jw6zCLdNu +MBy7vjBY0mnF4MUZocP5VSxOE7OI/2KPS1cIbVBfbSQgKVDsl3T17JgZpUC1INlu +u3J1J4UFUSXBRXZeGKTGaePtuLP5FwGabux18m42IgsBn0nA69QeTZCVnlYjsUoC +UoJe+uldOanZQmylZQjO8alz5YjNw2T5YNE4laAutU9tPMJgJWRzG8+mUacY5/Y/ +yVly2uPZ1CxG146MpHcQsApIMgovF9Uuuxjudf9nz4XOrjnUn6E5GxMh9HKBiQd8 +XbfH8DrDZf5ZquyytAtJ6T5m92mkje7T+76mOOKmKoDSl8xsDQgkUirOTb5g3wID +AQABo4HoMIHlMB0GA1UdDgQWBBRDJaAOuwlhj7eIeILZFmN8qyVW8DCBtQYDVR0j +BIGtMIGqgBRDJaAOuwlhj7eIeILZFmN8qyVW8KGBhqSBgzCBgDELMAkGA1UEBhMC +VVMxCzAJBgNVBAgTAk5ZMREwDwYDVQQHEwhOZXcgWW9yazEZMBcGA1UEChMQUmV0 +aGlua0RCIEVsaXhpcjEYMBYGA1UEAxMPZm9vLmV4YW1wbGUuY29tMRwwGgYJKoZI +hvcNAQkBFg1mb29AZ21haWwuY29tggkA5D8bQYoQ88cwDAYDVR0TBAUwAwEB/zAN +BgkqhkiG9w0BAQsFAAOCAQEAUpJVRQ0Bsy7jyLMTqKmR6qGiYKBM2/AaMRbn5pqi +0Uz/3Fu9a9POzI18j7ZxDD7HVGZvdjKc/d6+jx6PntReuXYwdkIjW19oBihYp3op +iaA7ZU0nAsefeyVcmfPtM+Kn3OW5/uIgYVIOiSLfWT4HVQxnKOWdVfaYieoz1gRO +wdXipeHwtsfjz3sDjJBoBIWdtysEPYsVCkAEcvlji6ugwWJ8SBqzdZl/NjWvecgW +ppSzO46l6WAZJxLdapAddOucrFSCbGAdf3WmHHRURCVaCbRhyBwDJh+vq76Imkh4 +M13jlo5+4K2NF9QCUEzwnC47uUOp1HqGGUoaeW4nTA07wg== +-----END CERTIFICATE----- diff --git a/test/changes_test.exs b/test/changes_test.exs index a242243..f7179d3 100644 --- a/test/changes_test.exs +++ b/test/changes_test.exs @@ -22,7 +22,7 @@ defmodule ChangesTest do test "first change" do q = table(@table_name) |> changes - changes = %Feed{} = run(q) + {:ok, changes = %Feed{}} = run(q) t = 
Task.async fn -> changes |> Enum.take(1) @@ -35,15 +35,15 @@ defmodule ChangesTest do test "changes" do q = table(@table_name) |> changes - changes = %Feed{} = run(q) + {:ok, changes = %Feed{}} = run(q) t = Task.async fn -> RethinkDB.Connection.next(changes) end data = %{"test" => "data"} q = table(@table_name) |> insert(data) - res = run(q) + {:ok, res} = run(q) expected = res.data["id"] - changes = Task.await(t) + {:ok, changes} = Task.await(t) ^expected = changes.data |> hd |> Map.get("id") # test Enumerable @@ -57,4 +57,46 @@ defmodule ChangesTest do data = Task.await(t) 5 = Enum.count(data) end + + test "point_changes" do + q = table(@table_name) |> get("0") |> changes + {:ok, changes = %Feed{}} = run(q) + t = Task.async fn -> + changes |> Enum.take(1) + end + data = %{"id" => "0"} + q = table(@table_name) |> insert(data) + {:ok, res} = run(q) + _expected = res.data["id"] + [h|[]] = Task.await(t) + assert %{"new_val" => %{"id" => "0"}} = h + end + + test "changes opts binary native" do + q = table(@table_name) |> get("0") |> changes + {:ok, changes = %Feed{}} = run(q) + t = Task.async fn -> + changes |> Enum.take(1) + end + data = %{"id" => "0", "binary" => binary(<<1>>)} + q = table(@table_name) |> insert(data) + {:ok, res} = run(q) + _expected = res.data["id"] + [h|[]] = Task.await(t) + assert %{"new_val" => %{"id" => "0", "binary" => <<1>>}} = h + end + + test "changes opts binary raw" do + q = table(@table_name) |> get("0") |> changes + {:ok, changes = %Feed{}} = run(q, [binary_format: :raw]) + t = Task.async fn -> + changes |> Enum.take(1) + end + data = %{"id" => "0", "binary" => binary(<<1>>)} + q = table(@table_name) |> insert(data) + {:ok, res} = run(q) + _expected = res.data["id"] + [h|[]] = Task.await(t) + assert %{"new_val" => %{"id" => "0", "binary" => %RethinkDB.Pseudotypes.Binary{data: "AQ=="}}} = h + end end diff --git a/test/connection_test.exs b/test/connection_test.exs index d9cedba..e95c2c8 100644 --- 
a/test/connection_test.exs +++ b/test/connection_test.exs @@ -32,7 +32,7 @@ defmodule ConnectionTest do %RethinkDB.Exception.ConnectionClosed{} = table_list |> run conn = FlakyConnection.start('localhost', 28015, [local_port: 28014]) :timer.sleep(1000) - %RethinkDB.Record{} = RethinkDB.Query.table_list |> run + {:ok, %RethinkDB.Record{}} = RethinkDB.Query.table_list |> run ref = Process.monitor(c) FlakyConnection.stop(conn) receive do @@ -53,7 +53,7 @@ defmodule ConnectionTest do GenServer.cast(__MODULE__, :stop) end table(table) |> index_wait |> run - change_feed = table(table) |> changes |> run + {:ok, change_feed} = table(table) |> changes |> run task = Task.async fn -> RethinkDB.Connection.next change_feed end @@ -84,7 +84,7 @@ defmodule ConnectionTest do {:ok, c} = RethinkDB.Connection.start_link(db: "new_test") db_create("new_test") |> RethinkDB.run(c) db("new_test") |> table_create("new_test_table") |> RethinkDB.run(c) - %{data: data} = table_list |> RethinkDB.run(c) + {:ok, %{data: data}} = table_list |> RethinkDB.run(c) assert data == ["new_test_table"] end @@ -108,8 +108,8 @@ defmodule ConnectionTest do test "ssl connection" do conn = FlakyConnection.start('localhost', 28015, [ssl: [keyfile: "./test/cert/host.key", certfile: "./test/cert/host.crt"]]) - {:ok, c} = RethinkDB.Connection.start_link(port: conn.port, ssl: [ca_certs: ["./test/cert/rootCA.crt"]], sync_connect: true) - %{data: _} = table_list |> RethinkDB.run(c) + {:ok, c} = RethinkDB.Connection.start_link(port: conn.port, ssl: [ca_certs: ["./test/cert/rootCA.pem"]], sync_connect: true) + {:ok, %{data: _}} = table_list |> RethinkDB.run(c) end end @@ -127,7 +127,7 @@ defmodule ConnectionRunTest do db_create("db_option_test") |> run table_create("db_option_test_table") |> run(db: "db_option_test") - %{data: data} = db("db_option_test") |> table_list |> run + {:ok, %{data: data}} = db("db_option_test") |> table_list |> run db_drop("db_option_test") |> run @@ -135,7 +135,8 @@ defmodule 
ConnectionRunTest do end test "run(conn, opts) with :durability option" do - response = table_create("durability_test_table") |> run(durability: "soft") + table_drop("durability_test_table") |> run + {:ok, response} = table_create("durability_test_table") |> run(durability: "soft") durability = response.data["config_changes"] |> List.first |> Map.fetch!("new_val") @@ -152,7 +153,7 @@ defmodule ConnectionRunTest do end test "run with :profile options" do - resp = make_array([1,2,3]) |> run(profile: true) + {:ok, resp} = make_array([1,2,3]) |> run(profile: true) assert [%{"description" => _, "duration(ms)" => _, "sub_tasks" => _}] = resp.profile end diff --git a/test/query/administration_query_test.exs b/test/query/administration_query_test.exs index 3775154..b3fadef 100644 --- a/test/query/administration_query_test.exs +++ b/test/query/administration_query_test.exs @@ -18,27 +18,27 @@ defmodule AdministrationQueryTest do end test "config" do - r = table(@table_name) |> config |> run + {:ok, r} = table(@table_name) |> config |> run assert %RethinkDB.Record{data: %{"db" => "test"}} = r end test "rebalance" do - r = table(@table_name) |> rebalance |> run + {:ok, r} = table(@table_name) |> rebalance |> run assert %RethinkDB.Record{data: %{"rebalanced" => _}} = r end test "reconfigure" do - r = table(@table_name) |> reconfigure(shards: 1, dry_run: true, replicas: 1) |> run + {:ok, r} = table(@table_name) |> reconfigure(shards: 1, dry_run: true, replicas: 1) |> run assert %RethinkDB.Record{data: %{"reconfigured" => _}} = r end test "status" do - r = table(@table_name) |> status |> run + {:ok, r} = table(@table_name) |> status |> run assert %RethinkDB.Record{data: %{"name" => @table_name}} = r end test "wait" do - r = table(@table_name) |> wait(wait_for: :ready_for_writes) |> run + {:ok, r} = table(@table_name) |> wait(wait_for: :ready_for_writes) |> run assert %RethinkDB.Record{data: %{"ready" => 1}} = r end end diff --git a/test/query/aggregation_test.exs 
b/test/query/aggregation_test.exs index 8474c51..bd5d716 100644 --- a/test/query/aggregation_test.exs +++ b/test/query/aggregation_test.exs @@ -21,7 +21,7 @@ defmodule AggregationTest do %{a: "bye"} ] |> group("a") - %Record{data: data} = query |> run + {:ok, %Record{data: data}} = query |> run assert data == %{ "bye" => [ %{"a" => "bye"} @@ -43,7 +43,7 @@ defmodule AggregationTest do |> group(lambda fn (x) -> (x["a"] == "hi") || (x["a"] == "hello") end) - %Record{data: data} = query |> run + {:ok, %Record{data: data}} = query |> run assert data == %{ false: [ %{"a" => "bye"}, @@ -67,7 +67,7 @@ defmodule AggregationTest do |> group([lambda(fn (x) -> (x["a"] == "hi") || (x["a"] == "hello") end), "b"]) - %Record{data: data} = query |> run + {:ok, %Record{data: data}} = query |> run assert data == %{ [false, nil] => [ %{"a" => "bye"}, @@ -95,7 +95,7 @@ defmodule AggregationTest do (x["a"] == "hi") || (x["a"] == "hello") end), "b"]) |> ungroup - %Record{data: data} = query |> run + {:ok, %Record{data: data}} = query |> run assert data == [ %{ "group" => [false, nil], @@ -124,19 +124,19 @@ defmodule AggregationTest do query = [1,2,3,4] |> reduce(lambda fn(el, acc) -> el + acc end) - %Record{data: data} = run query + {:ok, %Record{data: data}} = run query assert data == 10 end test "count" do query = [1,2,3,4] |> count - %Record{data: data} = run query + {:ok, %Record{data: data}} = run query assert data == 4 end test "count with value" do query = [1,2,2,3,4] |> count(2) - %Record{data: data} = run query + {:ok, %Record{data: data}} = run query assert data == 2 end @@ -144,19 +144,19 @@ defmodule AggregationTest do query = [1,2,2,3,4] |> count(lambda fn(x) -> rem(x, 2) == 0 end) - %Record{data: data} = run query + {:ok, %Record{data: data}} = run query assert data == 3 end test "sum" do query = [1,2,3,4] |> sum - %Record{data: data} = run query + {:ok, %Record{data: data}} = run query assert data == 10 end test "sum with field" do query = [%{a: 1},%{a: 2},%{b: 3},%{b: 
4}] |> sum("a") - %Record{data: data} = run query + {:ok, %Record{data: data}} = run query assert data == 3 end @@ -168,19 +168,19 @@ defmodule AggregationTest do x * 2 end end) - %Record{data: data} = run query + {:ok, %Record{data: data}} = run query assert data == 18 end test "avg" do query = [1,2,3,4] |> avg - %Record{data: data} = run query + {:ok, %Record{data: data}} = run query assert data == 2.5 end test "avg with field" do query = [%{a: 1},%{a: 2},%{b: 3},%{b: 4}] |> avg("a") - %Record{data: data} = run query + {:ok, %Record{data: data}} = run query assert data == 1.5 end @@ -192,25 +192,25 @@ defmodule AggregationTest do x * 2 end end) - %Record{data: data} = run query + {:ok, %Record{data: data}} = run query assert data == 6 end test "min" do query = [1,2,3,4] |> Query.min - %Record{data: data} = run query + {:ok, %Record{data: data}} = run query assert data == 1 end test "min with field" do query = [%{a: 1},%{a: 2},%{b: 3},%{b: 4}] |> Query.min("b") - %Record{data: data} = run query + {:ok, %Record{data: data}} = run query assert data == %{"b" => 3} end test "min with subquery field" do query = [%{a: 1},%{a: 2},%{b: 3},%{b: 4}] |> Query.min(Query.downcase("B")) - %Record{data: data} = run query + {:ok, %Record{data: data}} = run query assert data == %{"b" => 3} end @@ -223,25 +223,25 @@ defmodule AggregationTest do x * 2 end end) - %Record{data: data} = run query + {:ok, %Record{data: data}} = run query assert data == 2 end test "max" do query = [1,2,3,4] |> Query.max - %Record{data: data} = run query + {:ok, %Record{data: data}} = run query assert data == 4 end test "max with field" do query = [%{a: 1},%{a: 2},%{b: 3},%{b: 4}] |> Query.max("b") - %Record{data: data} = run query + {:ok, %Record{data: data}} = run query assert data == %{"b" => 4} end test "max with subquery field" do query = [%{a: 1},%{a: 2},%{b: 3},%{b: 4}] |> Query.max(Query.downcase("B")) - %Record{data: data} = run query + {:ok, %Record{data: data}} = run query assert data == %{"b" 
=> 4} end @@ -253,13 +253,13 @@ defmodule AggregationTest do x * 2 end end) - %Record{data: data} = run query + {:ok, %Record{data: data}} = run query assert data == 3 end test "distinct" do query = [1,2,3,3,4,4,5] |> distinct - %Record{data: data} = run query + {:ok, %Record{data: data}} = run query assert data == [1,2,3,4,5] end @@ -270,31 +270,31 @@ defmodule AggregationTest do test "contains" do query = [1,2,3,4] |> contains(4) - %Record{data: data} = run query + {:ok, %Record{data: data}} = run query assert data == true end test "contains multiple values" do query = [1,2,3,4] |> contains([4, 3]) - %Record{data: data} = run query + {:ok, %Record{data: data}} = run query assert data == true end test "contains with function" do query = [1,2,3,4] |> contains(lambda &(&1 == 3)) - %Record{data: data} = run query + {:ok, %Record{data: data}} = run query assert data == true end test "contains with multiple function" do query = [1,2,3,4] |> contains([lambda(&(&1 == 3)), lambda(&(&1 == 5))]) - %Record{data: data} = run query + {:ok, %Record{data: data}} = run query assert data == false end test "contains with multiple (mixed)" do query = [1,2,3,4] |> contains([lambda(&(&1 == 3)), 2]) - %Record{data: data} = run query + {:ok, %Record{data: data}} = run query assert data == true end end diff --git a/test/query/control_structures_adv_test.exs b/test/query/control_structures_adv_test.exs index 05b4ba3..d10cdb5 100644 --- a/test/query/control_structures_adv_test.exs +++ b/test/query/control_structures_adv_test.exs @@ -28,7 +28,7 @@ defmodule ControlStructuresAdvTest do table_query |> insert(%{a: x}) end) run q - %Collection{data: data} = run table_query + {:ok, %Collection{data: data}} = run table_query assert Enum.count(data) == 3 end end diff --git a/test/query/control_structures_test.exs b/test/query/control_structures_test.exs index b545162..dbd5bfb 100644 --- a/test/query/control_structures_test.exs +++ b/test/query/control_structures_test.exs @@ -13,100 +13,120 @@ 
defmodule ControlStructuresTest do test "args" do q = [%{a: 5, b: 6}, %{a: 4, c: 7}] |> pluck(args(["a","c"])) - %Record{data: data} = run q + {:ok, %Record{data: data}} = run q assert data == [%{"a" => 5}, %{"a" => 4, "c" => 7}] end - test "binary" do + test "binary raw" do d = << 220, 2, 3, 4, 5, 192 >> q = binary d - %Record{data: data} = run q - assert data == %RethinkDB.Pseudotypes.Binary{data: d} + {:ok, %Record{data: data}} = run q, [binary_format: :raw] + assert data == %RethinkDB.Pseudotypes.Binary{data: :base64.encode(d)} q = binary data - %Record{data: result} = run q + {:ok, %Record{data: result}} = run q, [binary_format: :raw] + assert data == result + end + + test "binary native" do + d = << 220, 2, 3, 4, 5, 192 >> + q = binary d + {:ok, %Record{data: data}} = run q + assert data == d + q = binary data + {:ok, %Record{data: result}} = run q, [binary_format: :native] + assert data == result + end + + test "binary native no wrapper" do + d = << 220, 2, 3, 4, 5, 192 >> + q = d + {:ok, %Record{data: data}} = run q + assert data == d + q = data + {:ok, %Record{data: result}} = run q, [binary_format: :native] assert data == result end test "do_r" do q = do_r fn -> 5 end - %Record{data: data} = run q + {:ok, %Record{data: data}} = run q assert data == 5 q = [1,2,3] |> do_r(fn x -> x end) - %Record{data: data} = run q + {:ok, %Record{data: data}} = run q assert data == [1,2,3] end test "branch" do q = branch(true, 1, 2) - %Record{data: data} = run q - assert data == 1 + {:ok, %Record{data: data}} = run q + assert data == 1 q = branch(false, 1, 2) - %Record{data: data} = run q - assert data == 2 + {:ok, %Record{data: data}} = run q + assert data == 2 end test "error" do q = do_r(fn -> error("hello") end) - %Response{data: data} = run q + {:error, %Response{data: data}} = run q assert data["r"] == ["hello"] end test "default" do q = 1 |> default("test") - %Record{data: data} = run q + {:ok, %Record{data: data}} = run q assert data == 1 q = nil |> 
default("test") - %Record{data: data} = run q + {:ok, %Record{data: data}} = run q assert data == "test" end test "js" do q = js "[40,100,1,5,25,10].sort()" - %Record{data: data} = run q + {:ok, %Record{data: data}} = run q assert data == [1,10,100,25,40,5] # couldn't help myself... end test "coerce_to" do q = "91" |> coerce_to("number") - %Record{data: data} = run q + {:ok, %Record{data: data}} = run q assert data == 91 end test "type_of" do q = "91" |> type_of - %Record{data: data} = run q + {:ok, %Record{data: data}} = run q assert data == "STRING" q = 91 |> type_of - %Record{data: data} = run q + {:ok, %Record{data: data}} = run q assert data == "NUMBER" q = [91] |> type_of - %Record{data: data} = run q + {:ok, %Record{data: data}} = run q assert data == "ARRAY" end test "info" do q = [91] |> info - %Record{data: %{"type" => type}} = run q + {:ok, %Record{data: %{"type" => type}}} = run q assert type == "ARRAY" end test "json" do q = "{\"a\": 5, \"b\": 6}" |> json - %Record{data: data} = run q + {:ok, %Record{data: data}} = run q assert data == %{"a" => 5, "b" => 6} end test "http" do q = "http://httpbin.org/get" |> http - %Record{data: data} = run q + {:ok, %Record{data: data}} = run q %{"args" => %{}, "headers" => _, "origin" => _, "url" => "http://httpbin.org/get"} = data end test "uuid" do - q = uuid - %Record{data: data} = run q + q = uuid + {:ok, %Record{data: data}} = run q assert String.length(String.replace(data, "-", "")) == 32 end end diff --git a/test/query/database_test.exs b/test/query/database_test.exs index 0ad6238..a2d0dcc 100644 --- a/test/query/database_test.exs +++ b/test/query/database_test.exs @@ -27,17 +27,17 @@ defmodule DatabaseTest do test "databases" do q = db_create(@db_name) - %Record{data: %{"dbs_created" => 1}} = run(q) + {:ok, %Record{data: %{"dbs_created" => 1}}} = run(q) q = db_list - %Record{data: dbs} = run(q) + {:ok, %Record{data: dbs}} = run(q) assert Enum.member?(dbs, @db_name) q = db_drop(@db_name) - %Record{data: 
%{"dbs_dropped" => 1}} = run(q) + {:ok, %Record{data: %{"dbs_dropped" => 1}}} = run(q) q = db_list - %Record{data: dbs} = run(q) + {:ok, %Record{data: dbs}} = run(q) assert !Enum.member?(dbs, @db_name) end end diff --git a/test/query/date_time_test.exs b/test/query/date_time_test.exs index d6532a2..8f80ca5 100644 --- a/test/query/date_time_test.exs +++ b/test/query/date_time_test.exs @@ -11,41 +11,70 @@ defmodule DateTimeTest do :ok end - test "now" do - %Record{data: data} = now |> run + test "now native" do + {:ok, %Record{data: data}} = now |> run + assert %DateTime{} = data + end + + test "now raw" do + {:ok, %Record{data: data}} = now |> run [time_format: :raw] assert %Time{} = data end - test "time" do - %Record{data: data} = time(1970,1,1,"Z") |> run + test "time native" do + {:ok, %Record{data: data}} = time(1970,1,1,"Z") |> run + assert data == DateTime.from_unix!(0, :milliseconds) + {:ok, %Record{data: data}} = time(1970,1,1,0,0,1,"Z") |> run [time_format: :native] + assert data == DateTime.from_unix!(1000, :milliseconds) + end + + test "time raw" do + {:ok, %Record{data: data}} = time(1970,1,1,"Z") |> run [time_format: :raw] assert data.epoch_time == 0 - %Record{data: data} = time(1970,1,1,0,0,1,"Z") |> run + {:ok, %Record{data: data}} = time(1970,1,1,0,0,1,"Z") |> run [time_format: :raw] assert data.epoch_time == 1 end - test "epoch_time" do - %Record{data: data} = epoch_time(1) |> run + test "epoch_time native" do + {:ok, %Record{data: data}} = epoch_time(1) |> run + assert data == DateTime.from_unix!(1000, :milliseconds) + end + + test "epoch_time raw" do + {:ok, %Record{data: data}} = epoch_time(1) |> run [time_format: :raw] assert data.epoch_time == 1 assert data.timezone == "+00:00" end - test "iso8601" do - %Record{data: data} = iso8601("1970-01-01T00:00:00+00:00") |> run + test "iso8601 native" do + {:ok, %Record{data: data}} = iso8601("1970-01-01T00:00:00+00:00") |> run + assert data == DateTime.from_unix!(0, :milliseconds) + {:ok, 
%Record{data: data}} = iso8601("1970-01-01T00:00:00", default_timezone: "+01:00") |> run + assert data == DateTime.from_unix!(-3600000, :milliseconds) |> struct(utc_offset: 3600, time_zone: "Etc/GMT-1", zone_abbr: "+01:00") + end + + test "iso8601 raw" do + {:ok, %Record{data: data}} = iso8601("1970-01-01T00:00:00+00:00") |> run [time_format: :raw] assert data.epoch_time == 0 assert data.timezone == "+00:00" - %Record{data: data} = iso8601("1970-01-01T00:00:00", default_timezone: "+01:00") |> run + {:ok, %Record{data: data}} = iso8601("1970-01-01T00:00:00", default_timezone: "+01:00") |> run [time_format: :raw] assert data.epoch_time == -3600 assert data.timezone == "+01:00" end - test "in_timezone" do - %Record{data: data} = epoch_time(0) |> in_timezone("+01:00") |> run + test "in_timezone native" do + {:ok, %Record{data: data}} = epoch_time(0) |> in_timezone("+01:00") |> run + assert data == DateTime.from_unix!(0, :milliseconds) |> struct(utc_offset: 3600, time_zone: "Etc/GMT-1", zone_abbr: "+01:00") + end + + test "in_timezone raw" do + {:ok, %Record{data: data}} = epoch_time(0) |> in_timezone("+01:00") |> run [time_format: :raw] assert data.timezone == "+01:00" assert data.epoch_time == 0 end test "timezone" do - %Record{data: data} = %Time{epoch_time: 0, timezone: "+01:00"} |> timezone |> run + {:ok, %Record{data: data}} = %Time{epoch_time: 0, timezone: "+01:00"} |> timezone |> run assert data == "+01:00" end @@ -53,69 +82,74 @@ defmodule DateTimeTest do a = epoch_time(5) b = epoch_time(10) c = epoch_time(7) - %Record{data: data} = c |> during(a,b) |> run + {:ok, %Record{data: data}} = c |> during(a,b) |> run assert data == true - %Record{data: data} = b |> during(a,c) |> run + {:ok, %Record{data: data}} = b |> during(a,c) |> run assert data == false end - test "date" do - %Record{data: data} = epoch_time(5) |> date |> run + test "date native" do + {:ok, %Record{data: data}} = epoch_time(5) |> date |> run + assert data == DateTime.from_unix!(0, :milliseconds) 
+ end + + test "date raw" do + {:ok, %Record{data: data}} = epoch_time(5) |> date |> run [time_format: :raw] assert data.epoch_time == 0 end test "time_of_day" do - %Record{data: data} = epoch_time(60*60*24 + 15) |> time_of_day |> run + {:ok, %Record{data: data}} = epoch_time(60*60*24 + 15) |> time_of_day |> run assert data == 15 end test "year" do - %Record{data: data} = epoch_time(2*365*60*60*24) |> year |> run + {:ok, %Record{data: data}} = epoch_time(2*365*60*60*24) |> year |> run assert data == 1972 end test "month" do - %Record{data: data} = epoch_time(2*30*60*60*24) |> month |> run + {:ok, %Record{data: data}} = epoch_time(2*30*60*60*24) |> month |> run assert data == 3 end test "day" do - %Record{data: data} = epoch_time(3*60*60*24) |> day |> run - assert data == 4 + {:ok, %Record{data: data}} = epoch_time(3*60*60*24) |> day |> run + assert data == 4 end test "day_of_week" do - %Record{data: data} = epoch_time(3*60*60*24) |> day_of_week |> run - assert data == 7 + {:ok, %Record{data: data}} = epoch_time(3*60*60*24) |> day_of_week |> run + assert data == 7 end test "day_of_year" do - %Record{data: data} = epoch_time(3*60*60*24) |> day_of_year |> run + {:ok, %Record{data: data}} = epoch_time(3*60*60*24) |> day_of_year |> run assert data == 4 end test "hours" do - %Record{data: data} = epoch_time(3*60*60) |> hours |> run + {:ok, %Record{data: data}} = epoch_time(3*60*60) |> hours |> run assert data == 3 end test "minutes" do - %Record{data: data} = epoch_time(3*60) |> minutes |> run + {:ok, %Record{data: data}} = epoch_time(3*60) |> minutes |> run assert data == 3 end test "seconds" do - %Record{data: data} = epoch_time(3) |> seconds |> run + {:ok, %Record{data: data}} = epoch_time(3) |> seconds |> run assert data == 3 end test "to_iso8601" do - %Record{data: data} = epoch_time(3) |> to_iso8601 |> run + {:ok, %Record{data: data}} = epoch_time(3) |> to_iso8601 |> run assert data == "1970-01-01T00:00:03+00:00" end test "to_epoch_time" do - %Record{data: data} = 
epoch_time(3) |> to_epoch_time |> run + {:ok, %Record{data: data}} = epoch_time(3) |> to_epoch_time |> run assert data == 3 end diff --git a/test/query/document_manipulation_test.exs b/test/query/document_manipulation_test.exs index 2d8478e..4572e2f 100644 --- a/test/query/document_manipulation_test.exs +++ b/test/query/document_manipulation_test.exs @@ -11,7 +11,7 @@ defmodule DocumentManipulationTest do end test "pluck" do - %Record{data: data} = [ + {:ok, %Record{data: data}} = [ %{a: 5, b: 6, c: 3}, %{a: 7, b: 8} ] |> pluck(["a", "b"]) |> run @@ -22,7 +22,7 @@ defmodule DocumentManipulationTest do end test "without" do - %Record{data: data} = [ + {:ok, %Record{data: data}} = [ %{a: 5, b: 6, c: 3}, %{a: 7, b: 8} ] |> without("a") |> run @@ -33,54 +33,59 @@ defmodule DocumentManipulationTest do end test "merge" do - %Record{data: data} = %{a: 4} |> merge(%{b: 5}) |> run + {:ok, %Record{data: data}} = %{a: 4} |> merge(%{b: 5}) |> run + assert data == %{"a" => 4, "b" => 5} + end + + test "merge list" do + {:ok, %Record{data: data}} = args([%{a: 4}, %{b: 5}]) |> merge |> run assert data == %{"a" => 4, "b" => 5} end test "append" do - %Record{data: data} = [1,2] |> append(3) |> run + {:ok, %Record{data: data}} = [1,2] |> append(3) |> run assert data == [1,2,3] end test "prepend" do - %Record{data: data} = [1,2] |> prepend(3) |> run + {:ok, %Record{data: data}} = [1,2] |> prepend(3) |> run assert data == [3,1,2] end test "difference" do - %Record{data: data} = [1,2] |> difference([2]) |> run + {:ok, %Record{data: data}} = [1,2] |> difference([2]) |> run assert data == [1] end test "set_insert" do - %Record{data: data} = [1,2] |> set_insert(2) |> run + {:ok, %Record{data: data}} = [1,2] |> set_insert(2) |> run assert data == [1,2] - %Record{data: data} = [1,2] |> set_insert(3) |> run + {:ok, %Record{data: data}} = [1,2] |> set_insert(3) |> run assert data == [1,2,3] end test "set_intersection" do - %Record{data: data} = [1,2] |> set_intersection([2,3]) |> run + {:ok, 
%Record{data: data}} = [1,2] |> set_intersection([2,3]) |> run assert data == [2] end test "set_union" do - %Record{data: data} = [1,2] |> set_union([2,3]) |> run + {:ok, %Record{data: data}} = [1,2] |> set_union([2,3]) |> run assert data == [1,2,3] end test "set_difference" do - %Record{data: data} = [1,2,4] |> set_difference([2,3]) |> run + {:ok, %Record{data: data}} = [1,2,4] |> set_difference([2,3]) |> run assert data == [1,4] end test "get_field" do - %Record{data: data} = %{a: 5, b: 6} |> get_field("a") |> run + {:ok, %Record{data: data}} = %{a: 5, b: 6} |> get_field("a") |> run assert data == 5 end test "has_fields" do - %Record{data: data} = [ + {:ok, %Record{data: data}} = [ %{"b" => 6, "c" => 3}, %{"b" => 8} ] |> has_fields(["c"]) |> run @@ -88,39 +93,39 @@ defmodule DocumentManipulationTest do end test "insert_at" do - %Record{data: data} = [1,2,3] |> insert_at(1, 5) |> run + {:ok, %Record{data: data}} = [1,2,3] |> insert_at(1, 5) |> run assert data == [1,5,2,3] end test "splice_at" do - %Record{data: data} = [1,2,3] |> splice_at(1, [5,6]) |> run + {:ok, %Record{data: data}} = [1,2,3] |> splice_at(1, [5,6]) |> run assert data == [1,5,6,2,3] end test "delete_at" do - %Record{data: data} = [1,2,3,4] |> delete_at(1) |> run + {:ok, %Record{data: data}} = [1,2,3,4] |> delete_at(1) |> run assert data == [1,3,4] - %Record{data: data} = [1,2,3,4] |> delete_at(1,3) |> run + {:ok, %Record{data: data}} = [1,2,3,4] |> delete_at(1,3) |> run assert data == [1,4] end test "change_at" do - %Record{data: data} = [1,2,3,4] |> change_at(1,7) |> run + {:ok, %Record{data: data}} = [1,2,3,4] |> change_at(1,7) |> run assert data == [1,7,3,4] end test "keys" do - %Record{data: data} = %{a: 5, b: 6} |> keys |> run + {:ok, %Record{data: data}} = %{a: 5, b: 6} |> keys |> run assert data == ["a", "b"] end test "values" do - %Record{data: data} = %{a: 5, b: 6} |> values |> run + {:ok, %Record{data: data}} = %{a: 5, b: 6} |> values |> run assert data == [5, 6] end test "literal" do - 
%Record{data: data} = %{ + {:ok, %Record{data: data}} = %{ a: 5, b: %{ c: 6 @@ -135,7 +140,7 @@ defmodule DocumentManipulationTest do end test "object" do - %Record{data: data} = object(["a", 1, "b", 2]) |> run + {:ok, %Record{data: data}} = object(["a", 1, "b", 2]) |> run assert data == %{"a" => 1, "b" => 2} end end diff --git a/test/query/geospatial_adv_test.exs b/test/query/geospatial_adv_test.exs index e74dfca..bc09c62 100644 --- a/test/query/geospatial_adv_test.exs +++ b/test/query/geospatial_adv_test.exs @@ -30,7 +30,7 @@ defmodule GeospatialAdvTest do table(@table_name) |> insert( %{location: point(0.001,0)} ) |> run - %{data: data} = table(@table_name) |> get_intersecting( + {:ok, %{data: data}} = table(@table_name) |> get_intersecting( circle({0,0}, 5000), index: "location" ) |> run points = for x <- data, do: x["location"].coordinates @@ -44,7 +44,7 @@ defmodule GeospatialAdvTest do table(@table_name) |> insert( %{location: point(0.001,0)} ) |> run - %Record{data: data} = table(@table_name) |> get_nearest( + {:ok, %Record{data: data}} = table(@table_name) |> get_nearest( point({0,0}), index: "location", max_dist: 5000000 ) |> run assert Enum.count(data) == 2 diff --git a/test/query/geospatial_test.exs b/test/query/geospatial_test.exs index 27226df..d5bc5c3 100644 --- a/test/query/geospatial_test.exs +++ b/test/query/geospatial_test.exs @@ -14,55 +14,64 @@ defmodule GeospatialTest do end test "circle" do - %Record{data: data} = circle({1,1}, 5) |> run - assert %Polygon{outer_coordinates: [_h | _t], inner_coordinates: []} = data + {:ok, %Record{data: data}} = circle({1,1}, 5) |> run + assert %Polygon{coordinates: [_h | []]} = data end test "circle with opts" do - %Record{data: data} = circle({1,1}, 5, num_vertices: 100, fill: true) |> run - assert %Polygon{outer_coordinates: [_h | _t], inner_coordinates: []} = data + {:ok, %Record{data: data}} = circle({1,1}, 5, num_vertices: 100, fill: true) |> run + assert %Polygon{coordinates: [_h |[]]} = data end - test 
"distance" do - %Record{data: data} = distance(point({1,1}), point({2,2})) |> run + {:ok, %Record{data: data}} = distance(point({1,1}), point({2,2})) |> run assert data == 156876.14940188665 end - + test "fill" do - %Record{data: data} = fill(line([{1,1}, {4,5}, {2,2}, {1,1}])) |> run - assert data == %Polygon{outer_coordinates: [{1,1}, {4,5}, {2,2}, {1,1}]} + {:ok, %Record{data: data}} = fill(line([{1,1}, {4,5}, {2,2}, {1,1}])) |> run + assert data == %Polygon{coordinates: [[{1,1}, {4,5}, {2,2}, {1,1}]]} end test "geojson" do - %Record{data: data} = geojson(%{coordinates: [1,1], type: "Point"}) |> run + {:ok, %Record{data: data}} = geojson(%{coordinates: [1,1], type: "Point"}) |> run assert data == %Point{coordinates: {1,1}} end + test "geojson with holes" do + coords = [ square(0,0,10), square(1,1,1), square(4,4,1) ] + {:ok, %Record{data: data}} = geojson(%{type: "Polygon", coordinates: coords}) |> run + assert data == %Polygon{coordinates: coords} + end + + defp square(x,y,s) do + [{x,y}, {x+s,y}, {x+s,y+s}, {x,y+s}, {x,y}] + end + test "to_geojson" do - %Record{data: data} = point({1,1}) |> to_geojson |> run + {:ok, %Record{data: data}} = point({1,1}) |> to_geojson |> run assert data == %{"type" => "Point", "coordinates" => [1,1]} end # TODO: get_intersecting, get_nearest, includes, intersects test "point" do - %Record{data: data} = point({1,1}) |> run + {:ok, %Record{data: data}} = point({1,1}) |> run assert data == %Point{coordinates: {1, 1}} end test "line" do - %Record{data: data} = line([{1,1}, {4,5}]) |> run + {:ok, %Record{data: data}} = line([{1,1}, {4,5}]) |> run assert data == %Line{coordinates: [{1, 1}, {4,5}]} end test "includes" do - %Record{data: data} = [circle({0,0}, 1000), circle({0.001,0}, 1000), circle({100,100}, 1)] |> includes( + {:ok, %Record{data: data}} = [circle({0,0}, 1000), circle({0.001,0}, 1000), circle({100,100}, 1)] |> includes( point(0,0) ) |> run assert Enum.count(data) == 2 - %Record{data: data} = circle({0,0}, 1000) |> 
includes(point(0,0)) |> run
+    {:ok, %Record{data: data}} = circle({0,0}, 1000) |> includes(point(0,0)) |> run
     assert data == true
-    %Record{data: data} = circle({0,0}, 1000) |> includes(point(80,80)) |> run
+    {:ok, %Record{data: data}} = circle({0,0}, 1000) |> includes(point(80,80)) |> run
     assert data == false
   end
@@ -72,24 +81,23 @@ defmodule GeospatialTest do
     ] |> intersects( circle({0,0}, 10) )
-    %Record{data: data} = b |> run
+    {:ok, %Record{data: data}} = b |> run
     assert Enum.count(data) == 2
-    %Record{data: data} = circle({0,0}, 1000) |> intersects(circle({0,0}, 1)) |> run
+    {:ok, %Record{data: data}} = circle({0,0}, 1000) |> intersects(circle({0,0}, 1)) |> run
     assert data == true
-    %Record{data: data} = circle({0,0}, 1000) |> intersects(circle({80,80}, 1)) |> run
+    {:ok, %Record{data: data}} = circle({0,0}, 1000) |> intersects(circle({80,80}, 1)) |> run
     assert data == false
   end

   test "polygon" do
-    %Record{data: data} = polygon([{0,0}, {0,1}, {1,1}, {1,0}]) |> run
-    assert data.outer_coordinates == [{0,0}, {0,1}, {1,1}, {1,0}, {0,0}]
+    {:ok, %Record{data: data}} = polygon([{0,0}, {0,1}, {1,1}, {1,0}]) |> run
+    assert data.coordinates == [[{0,0}, {0,1}, {1,1}, {1,0}, {0,0}]]
   end

   test "polygon_sub" do
     p1 = polygon([{0,0}, {0,1}, {1,1}, {1,0}])
     p2 = polygon([{0.25,0.25}, {0.25,0.5}, {0.5,0.5}, {0.5,0.25}])
-    %Record{data: data} = p1 |> polygon_sub(p2) |> run
-    assert data.outer_coordinates == [{0,0}, {0,1}, {1,1}, {1,0}, {0,0}]
-    assert data.inner_coordinates == [{0.25,0.25}, {0.25,0.5}, {0.5,0.5}, {0.5,0.25}, {0.25,0.25}]
+    {:ok, %Record{data: data}} = p1 |> polygon_sub(p2) |> run
+    assert data.coordinates == [[{0,0}, {0,1}, {1,1}, {1,0}, {0,0}], [{0.25,0.25}, {0.25,0.5}, {0.5,0.5}, {0.5,0.25}, {0.25,0.25}]]
   end
 end
diff --git a/test/query/joins_test.exs b/test/query/joins_test.exs
index a7347ca..6b28e1a 100644
--- a/test/query/joins_test.exs
+++ b/test/query/joins_test.exs
@@ -31,10 +31,10 @@ defmodule JoinsTest do
     q = inner_join(left, right,lambda fn l, r ->
       l[:a] == r[:a]
     end)
-    %Record{data: data} = run q
+    {:ok, %Record{data: data}} = run q
     assert data == [%{"left" => %{"a" => 1, "b" => 2}, "right" => %{"a" => 1, "c" => 4}},
       %{"left" => %{"a" => 2, "b" => 3}, "right" => %{"a" => 2, "c" => 6}}]
-    %Record{data: data} = q |> zip |> run
+    {:ok, %Record{data: data}} = q |> zip |> run
     assert data == [%{"a" => 1, "b" => 2, "c" => 4}, %{"a" => 2, "b" => 3, "c" => 6}]
   end
@@ -44,10 +44,10 @@ defmodule JoinsTest do
     q = outer_join(left, right, lambda fn l, r ->
       l[:a] == r[:a]
     end)
-    %Record{data: data} = run q
+    {:ok, %Record{data: data}} = run q
     assert data == [%{"left" => %{"a" => 1, "b" => 2}, "right" => %{"a" => 1, "c" => 4}},
       %{"left" => %{"a" => 2, "b" => 3}}]
-    %Record{data: data} = q |> zip |> run
+    {:ok, %Record{data: data}} = q |> zip |> run
     assert data == [%{"a" => 1, "b" => 2, "c" => 4}, %{"a" => 2, "b" => 3}]
   end
@@ -57,8 +57,8 @@ defmodule JoinsTest do
     table("test_1") |> insert([%{id: 3, a: 1, b: 2}, %{id: 2, a: 2, b: 3}]) |> run
     table("test_2") |> insert([%{id: 1, c: 4}]) |> run
     q = eq_join(table("test_1"), :a, table("test_2"), index: :id)
-    %Collection{data: data} = run q
-    %Collection{data: data2} = q |> zip |> run
+    {:ok, %Collection{data: data}} = run q
+    {:ok, %Collection{data: data2}} = q |> zip |> run
     table_drop("test_1") |> run
     table_drop("test_2") |> run
     assert data == [
diff --git a/test/query/math_logic_test.exs b/test/query/math_logic_test.exs
index 2f01f80..505a03a 100644
--- a/test/query/math_logic_test.exs
+++ b/test/query/math_logic_test.exs
@@ -11,214 +11,214 @@ defmodule MathLogicTest do
   end

   test "add scalars" do
-    %Record{data: data} = add(1,2) |> run
+    {:ok, %Record{data: data}} = add(1,2) |> run
     assert data == 3
   end

   test "add list of scalars" do
-    %Record{data: data} = add([1,2]) |> run
+    {:ok, %Record{data: data}} = add([1,2]) |> run
     assert data == 3
   end

   test "concatenate two strings" do
-    %Record{data: data} = add("hello ","world") |> run
+    {:ok, %Record{data: data}} = add("hello ","world") |> run
     assert data == "hello world"
   end

   test "concatenate list of strings" do
-    %Record{data: data} = add(["hello", " ", "world"]) |> run
+    {:ok, %Record{data: data}} = add(["hello", " ", "world"]) |> run
     assert data == "hello world"
   end

   test "concatenate two arrays" do
-    %Record{data: data} = add([1,2,3],[3,4,5]) |> run
+    {:ok, %Record{data: data}} = add([1,2,3],[3,4,5]) |> run
     assert data == [1,2,3,3,4,5]
   end

   test "concatenate list of arrays" do
-    %Record{data: data} = add([[1,2,3],[3,4,5],[5,6,7]]) |> run
+    {:ok, %Record{data: data}} = add([[1,2,3],[3,4,5],[5,6,7]]) |> run
     assert data == [1,2,3,3,4,5,5,6,7]
   end

   test "subtract two numbers" do
-    %Record{data: data} = sub(5,2) |> run
+    {:ok, %Record{data: data}} = sub(5,2) |> run
     assert data == 3
   end

   test "subtract list of numbers" do
-    %Record{data: data} = sub([9,3,1]) |> run
+    {:ok, %Record{data: data}} = sub([9,3,1]) |> run
     assert data == 5
   end

   test "multiply two numbers" do
-    %Record{data: data} = mul(5,2) |> run
+    {:ok, %Record{data: data}} = mul(5,2) |> run
     assert data == 10
   end

   test "multiply list of numbers" do
-    %Record{data: data} = mul([1,2,3,4,5]) |> run
+    {:ok, %Record{data: data}} = mul([1,2,3,4,5]) |> run
     assert data == 120
   end

   test "create periodic array" do
-    %Record{data: data} = mul(3, [1,2]) |> run
+    {:ok, %Record{data: data}} = mul(3, [1,2]) |> run
     assert data == [1,2,1,2,1,2]
   end

   test "divide two numbers" do
-    %Record{data: data} = divide(6, 3) |> run
+    {:ok, %Record{data: data}} = divide(6, 3) |> run
     assert data == 2
   end

   test "divide list of numbers" do
-    %Record{data: data} = divide([12,3,2]) |> run
+    {:ok, %Record{data: data}} = divide([12,3,2]) |> run
     assert data == 2
   end

   test "find remainder when dividing two numbers" do
-    %Record{data: data} = mod(23, 4) |> run
+    {:ok, %Record{data: data}} = mod(23, 4) |> run
     assert data == 3
   end

   test "logical and of two values" do
-    %Record{data: data} = and_r(true, true) |> run
+    {:ok, %Record{data: data}} = and_r(true, true) |> run
     assert data == true
   end

   test "logical and of list" do
-    %Record{data: data} = and_r([true, true, false]) |> run
+    {:ok, %Record{data: data}} = and_r([true, true, false]) |> run
     assert data == false
   end

   test "logical or of two values" do
-    %Record{data: data} = or_r(true, false) |> run
+    {:ok, %Record{data: data}} = or_r(true, false) |> run
     assert data == true
   end

   test "logical or of list" do
-    %Record{data: data} = or_r([false, false, false]) |> run
+    {:ok, %Record{data: data}} = or_r([false, false, false]) |> run
     assert data == false
   end

   test "two numbers are equal" do
-    %Record{data: data} = eq(1, 1) |> run
+    {:ok, %Record{data: data}} = eq(1, 1) |> run
     assert data == true
-    %Record{data: data} = eq(2, 1) |> run
+    {:ok, %Record{data: data}} = eq(2, 1) |> run
     assert data == false
   end

   test "values in a list are equal" do
-    %Record{data: data} = eq([1, 1, 1]) |> run
+    {:ok, %Record{data: data}} = eq([1, 1, 1]) |> run
     assert data == true
-    %Record{data: data} = eq([1, 2, 1]) |> run
+    {:ok, %Record{data: data}} = eq([1, 2, 1]) |> run
     assert data == false
   end

   test "two numbers are not equal" do
-    %Record{data: data} = ne(1, 1) |> run
+    {:ok, %Record{data: data}} = ne(1, 1) |> run
     assert data == false
-    %Record{data: data} = ne(2, 1) |> run
+    {:ok, %Record{data: data}} = ne(2, 1) |> run
     assert data == true
   end

   test "values in a list are not equal" do
-    %Record{data: data} = ne([1, 1, 1]) |> run
+    {:ok, %Record{data: data}} = ne([1, 1, 1]) |> run
     assert data == false
-    %Record{data: data} = ne([1, 2, 1]) |> run
+    {:ok, %Record{data: data}} = ne([1, 2, 1]) |> run
     assert data == true
   end

   test "a number is less than the other" do
-    %Record{data: data} = lt(2, 1) |> run
+    {:ok, %Record{data: data}} = lt(2, 1) |> run
     assert data == false
-    %Record{data: data} = lt(1, 2) |> run
+    {:ok, %Record{data: data}} = lt(1, 2) |> run
     assert data == true
   end

   test "values in a list less than the next" do
-    %Record{data: data} = lt([1, 4, 2]) |> run
+    {:ok, %Record{data: data}} = lt([1, 4, 2]) |> run
     assert data == false
-    %Record{data: data} = lt([1, 4, 5]) |> run
+    {:ok, %Record{data: data}} = lt([1, 4, 5]) |> run
     assert data == true
   end

   test "a number is less than or equal to the other" do
-    %Record{data: data} = le(1, 1) |> run
+    {:ok, %Record{data: data}} = le(1, 1) |> run
     assert data == true
-    %Record{data: data} = le(1, 2) |> run
+    {:ok, %Record{data: data}} = le(1, 2) |> run
     assert data == true
   end

   test "values in a list less than or equal to the next" do
-    %Record{data: data} = le([1, 4, 2]) |> run
+    {:ok, %Record{data: data}} = le([1, 4, 2]) |> run
     assert data == false
-    %Record{data: data} = le([1, 4, 4]) |> run
+    {:ok, %Record{data: data}} = le([1, 4, 4]) |> run
     assert data == true
   end

   test "a number is greater than the other" do
-    %Record{data: data} = gt(1, 1) |> run
+    {:ok, %Record{data: data}} = gt(1, 1) |> run
     assert data == false
-    %Record{data: data} = gt(2, 1) |> run
+    {:ok, %Record{data: data}} = gt(2, 1) |> run
     assert data == true
   end

   test "values in a list greater than the next" do
-    %Record{data: data} = gt([1, 4, 2]) |> run
+    {:ok, %Record{data: data}} = gt([1, 4, 2]) |> run
     assert data == false
-    %Record{data: data} = gt([10, 4, 2]) |> run
+    {:ok, %Record{data: data}} = gt([10, 4, 2]) |> run
     assert data == true
   end

   test "a number is greater than or equal to the other" do
-    %Record{data: data} = ge(1, 1) |> run
+    {:ok, %Record{data: data}} = ge(1, 1) |> run
     assert data == true
-    %Record{data: data} = ge(2, 1) |> run
+    {:ok, %Record{data: data}} = ge(2, 1) |> run
     assert data == true
   end

   test "values in a list greater than or equal to the next" do
-    %Record{data: data} = ge([1, 4, 2]) |> run
+    {:ok, %Record{data: data}} = ge([1, 4, 2]) |> run
     assert data == false
-    %Record{data: data} = ge([10, 4, 4]) |> run
+    {:ok, %Record{data: data}} = ge([10, 4, 4]) |> run
     assert data == true
   end

   test "not operator" do
-    %Record{data: data} = not_r(true) |> run
+    {:ok, %Record{data: data}} = not_r(true) |> run
     assert data == false
   end

   test "random operator" do
-    %Record{data: data} = random |> run
+    {:ok, %Record{data: data}} = random |> run
     assert data >= 0.0 && data <= 1.0
-    %Record{data: data} = random(100) |> run
+    {:ok, %Record{data: data}} = random(100) |> run
     assert is_integer(data) && data >= 0 && data <= 100
-    %Record{data: data} = random(100.0) |> run
+    {:ok, %Record{data: data}} = random(100.0) |> run
     assert is_float(data) && data >= 0.0 && data <= 100.0
-    %Record{data: data} = random(50, 100) |> run
+    {:ok, %Record{data: data}} = random(50, 100) |> run
     assert is_integer(data) && data >= 50 && data <= 100
-    %Record{data: data} = random(50, 100.0) |> run
+    {:ok, %Record{data: data}} = random(50, 100.0) |> run
     assert is_float(data) && data >= 50.0 && data <= 100.0
   end

   test "round" do
-    %Record{data: data} = round_r(0.3) |> run
+    {:ok, %Record{data: data}} = round_r(0.3) |> run
     assert data == 0
-    %Record{data: data} = round_r(0.6) |> run
+    {:ok, %Record{data: data}} = round_r(0.6) |> run
     assert data == 1
   end

   test "ceil" do
-    %Record{data: data} = ceil(0.3) |> run
+    {:ok, %Record{data: data}} = ceil(0.3) |> run
     assert data == 1
-    %Record{data: data} = ceil(0.6) |> run
+    {:ok, %Record{data: data}} = ceil(0.6) |> run
     assert data == 1
   end

   test "floor" do
-    %Record{data: data} = floor(0.3) |> run
+    {:ok, %Record{data: data}} = floor(0.3) |> run
     assert data == 0
-    %Record{data: data} = floor(0.6) |> run
+    {:ok, %Record{data: data}} = floor(0.6) |> run
     assert data == 0
   end
 end
diff --git a/test/query/selection_test.exs b/test/query/selection_test.exs
index 1c007c9..922b0eb 100644
--- a/test/query/selection_test.exs
+++ b/test/query/selection_test.exs
@@ -26,14 +26,14 @@ defmodule SelectionTest do
   test "get" do
     table(@table_name) |> insert(%{id: "a", a: 5}) |> run
-    %Record{data: data} = table(@table_name) |> get("a") |> run
+    {:ok, %Record{data: data}} = table(@table_name) |> get("a") |> run
     assert data == %{"a" => 5, "id" => "a"}
   end

   test "get all" do
     table(@table_name) |> insert(%{id: "a", a: 5}) |> run
     table(@table_name) |> insert(%{id: "b", a: 5}) |> run
-    data = table(@table_name) |> get_all(["a", "b"]) |> run
+    {:ok, data} = table(@table_name) |> get_all(["a", "b"]) |> run
     assert Enum.sort(Enum.to_list(data)) == [
       %{"a" => 5, "id" => "a"},
       %{"a" => 5, "id" => "b"}
@@ -45,7 +45,7 @@ defmodule SelectionTest do
     table(@table_name) |> insert(%{id: "b", other_id: "d"}) |> run
     table(@table_name) |> index_create("other_id") |> run
     table(@table_name) |> index_wait("other_id") |> run
-    data = table(@table_name) |> get_all(["c", "d"], index: "other_id") |> run
+    {:ok, data} = table(@table_name) |> get_all(["c", "d"], index: "other_id") |> run
     assert Enum.sort(Enum.to_list(data)) == [
       %{"id" => "a", "other_id" => "c"},
       %{"id" => "b", "other_id" => "d"}
@@ -53,7 +53,7 @@
   end

   test "get all should be able to accept an empty list" do
-    result = table(@table_name) |> get_all([]) |> run
+    {:ok, result} = table(@table_name) |> get_all([]) |> run
     assert result.data == []
   end
@@ -61,9 +61,9 @@
     table(@table_name) |> insert(%{id: "a", a: 5}) |> run
     table(@table_name) |> insert(%{id: "b", a: 5}) |> run
     table(@table_name) |> insert(%{id: "c", a: 5}) |> run
-    %RethinkDB.Collection{data: data} = table(@table_name) |> between("b", "d") |> run
+    {:ok, %RethinkDB.Collection{data: data}} = table(@table_name) |> between("b", "d") |> run
     assert Enum.count(data) == 2
-    %RethinkDB.Collection{data: data} = table(@table_name) |> between(minval, maxval) |> run
+    {:ok, %RethinkDB.Collection{data: data}} = table(@table_name) |> between(minval, maxval) |> run
     assert Enum.count(data) == 3
   end
@@ -71,9 +71,9 @@
     table(@table_name) |> insert(%{id: "a", a: 5}) |> run
     table(@table_name) |> insert(%{id: "b", a: 5}) |> run
     table(@table_name) |> insert(%{id: "c", a: 6}) |> run
-    %RethinkDB.Collection{data: data} = table(@table_name) |> filter(%{a: 6}) |> run
+    {:ok, %RethinkDB.Collection{data: data}} = table(@table_name) |> filter(%{a: 6}) |> run
     assert Enum.count(data) == 1
-    %RethinkDB.Collection{data: data} = table(@table_name) |> filter(
+    {:ok, %RethinkDB.Collection{data: data}} = table(@table_name) |> filter(
       lambda fn (x) ->
         x["a"] == 5
       end) |> run
diff --git a/test/query/string_manipulation_test.exs b/test/query/string_manipulation_test.exs
index 9d04de1..7eec1f3 100644
--- a/test/query/string_manipulation_test.exs
+++ b/test/query/string_manipulation_test.exs
@@ -11,30 +11,30 @@ defmodule StringManipulationTest do
   end

   test "match a string", context do
-    %Record{data: data} = "hello world" |> match("hello") |> run(context.conn)
+    {:ok, %Record{data: data}} = "hello world" |> match("hello") |> run(context.conn)
     assert data == %{"end" => 5, "groups" => [], "start" => 0, "str" => "hello"}
   end

   test "match a regex", context do
-    %Record{data: data} = "hello world" |> match(~r(hello)) |> run(context.conn)
+    {:ok, %Record{data: data}} = "hello world" |> match(~r(hello)) |> run(context.conn)
     assert data == %{"end" => 5, "groups" => [], "start" => 0, "str" => "hello"}
   end

   test "split a string", context do
-    %Record{data: data} = "abracadabra" |> split |> run(context.conn)
+    {:ok, %Record{data: data}} = "abracadabra" |> split |> run(context.conn)
     assert data == ["abracadabra"]
-    %Record{data: data} = "abra-cadabra" |> split("-") |> run(context.conn)
+    {:ok, %Record{data: data}} = "abra-cadabra" |> split("-") |> run(context.conn)
     assert data == ["abra", "cadabra"]
-    %Record{data: data} = "a-bra-ca-da-bra" |> split("-", 2) |> run(context.conn)
+    {:ok, %Record{data: data}} = "a-bra-ca-da-bra" |> split("-", 2) |> run(context.conn)
     assert data == ["a", "bra", "ca-da-bra"]
   end

   test "upcase", context do
-    %Record{data: data} = "hi" |> upcase |> run(context.conn)
+    {:ok, %Record{data: data}} = "hi" |> upcase |> run(context.conn)
     assert data == "HI"
   end

   test "downcase", context do
-    %Record{data: data} = "Hi" |> downcase |> run(context.conn)
+    {:ok, %Record{data: data}} = "Hi" |> downcase |> run(context.conn)
     assert data == "hi"
   end
 end
diff --git a/test/query/table_db_test.exs b/test/query/table_db_test.exs
index 5d8962f..4558a9e 100644
--- a/test/query/table_db_test.exs
+++ b/test/query/table_db_test.exs
@@ -20,21 +20,21 @@ defmodule TableDBTest do
     end

     q = db(@db_name) |> table_create(@table_name)
-    %Record{data: %{"tables_created" => 1}} = run q
+    {:ok, %Record{data: %{"tables_created" => 1}}} = run q

     q = db(@db_name) |> table_list
-    %Record{data: tables} = run q
+    {:ok, %Record{data: tables}} = run q
     assert Enum.member?(tables, @table_name)

     q = db(@db_name) |> table_drop(@table_name)
-    %Record{data: %{"tables_dropped" => 1}} = run q
+    {:ok, %Record{data: %{"tables_dropped" => 1}}} = run q

     q = db(@db_name) |> table_list
-    %Record{data: tables} = run q
+    {:ok, %Record{data: tables}} = run q
     assert !Enum.member?(tables, @table_name)

     q = db(@db_name) |> table_create(@table_name, primary_key: "not_id")
-    %Record{data: result} = run q
+    {:ok, %Record{data: result}} = run q
     %{"config_changes" => [%{"new_val" => %{"primary_key" => primary_key}}]} = result
     assert primary_key == "not_id"
   end
diff --git a/test/query/table_index_test.exs b/test/query/table_index_test.exs
index e20130e..0e5c790 100644
--- a/test/query/table_index_test.exs
+++ b/test/query/table_index_test.exs
@@ -19,25 +19,25 @@ defmodule TableIndexTest do
   end

   test "indexes" do
-    %Record{data: data} = table(@table_name) |> index_create("hello") |> run
+    {:ok, %Record{data: data}} = table(@table_name) |> index_create("hello") |> run
     assert data == %{"created" => 1}
-    %Record{data: data} = table(@table_name) |> index_wait("hello") |> run
+    {:ok, %Record{data: data}} = table(@table_name) |> index_wait("hello") |> run
     assert [
       %{"function" => _, "geo" => false, "index" => "hello", "multi" => false, "outdated" => false,"ready" => true}
     ] = data
-    %Record{data: data} = table(@table_name) |> index_status("hello") |> run
+    {:ok, %Record{data: data}} = table(@table_name) |> index_status("hello") |> run
     assert [
       %{"function" => _, "geo" => false, "index" => "hello", "multi" => false, "outdated" => false,"ready" => true}
     ] = data
-    %Record{data: data} = table(@table_name) |> index_list |> run
+    {:ok, %Record{data: data}} = table(@table_name) |> index_list |> run
     assert data == ["hello"]

     table(@table_name) |> index_rename("hello", "goodbye") |> run
-    %Record{data: data} = table(@table_name) |> index_list |> run
+    {:ok, %Record{data: data}} = table(@table_name) |> index_list |> run
     assert data == ["goodbye"]

     table(@table_name) |> index_drop("goodbye") |> run
-    %Record{data: data} = table(@table_name) |> index_list |> run
+    {:ok, %Record{data: data}} = table(@table_name) |> index_list |> run
     assert data == []
   end
 end
diff --git a/test/query/table_test.exs b/test/query/table_test.exs
index 4f0e421..25aec55 100644
--- a/test/query/table_test.exs
+++ b/test/query/table_test.exs
@@ -17,21 +17,21 @@ defmodule TableTest do
       table_drop(@table_name) |> run
     end
     q = table_create(@table_name)
-    %Record{data: %{"tables_created" => 1}} = run q
+    {:ok, %Record{data: %{"tables_created" => 1}}} = run q

     q = table_list
-    %Record{data: tables} = run q
+    {:ok, %Record{data: tables}} = run q
     assert Enum.member?(tables, @table_name)

     q = table_drop(@table_name)
-    %Record{data: %{"tables_dropped" => 1}} = run q
+    {:ok, %Record{data: %{"tables_dropped" => 1}}} = run q

     q = table_list
-    %Record{data: tables} = run q
+    {:ok, %Record{data: tables}} = run q
     assert !Enum.member?(tables, @table_name)

     q = table_create(@table_name, primary_key: "not_id")
-    %Record{data: result} = run q
+    {:ok, %Record{data: result}} = run q
     %{"config_changes" => [%{"new_val" => %{"primary_key" => primary_key}}]} = result
     assert primary_key == "not_id"
   end
diff --git a/test/query/transformation_test.exs b/test/query/transformation_test.exs
index fbf70a6..3d6d38d 100644
--- a/test/query/transformation_test.exs
+++ b/test/query/transformation_test.exs
@@ -15,12 +15,12 @@ defmodule TransformationTest do
   end

   test "map" do
-    %Record{data: data} = map([1,2,3], lambda &(&1 + 1)) |> run
+    {:ok, %Record{data: data}} = map([1,2,3], lambda &(&1 + 1)) |> run
     assert data == [2,3,4]
   end

   test "with_fields" do
-    %Record{data: data} = [
+    {:ok, %Record{data: data}} = [
       %{a: 5},
       %{a: 6},
       %{a: 7, b: 8}
@@ -29,7 +29,7 @@
   end

   test "flat_map" do
-    %Record{data: data} = [
+    {:ok, %Record{data: data}} = [
       [1,2,3],
       [4,5,6],
       [7,8,9]
@@ -40,7 +40,7 @@
   end

   test "order_by" do
-    %Record{data: data} = [
+    {:ok, %Record{data: data}} = [
       %{a: 1},
       %{a: 7},
       %{a: 4},
@@ -60,45 +60,45 @@
     data = [%{"rank" => 1}, %{"rank" => 2}, %{"rank" => 3}]
     q = data |> order_by(desc("rank"))
-    %{data: result} = run(q)
+    {:ok, %{data: result}} = run(q)
     assert result == Enum.reverse(data)
   end

   test "skip" do
-    %Record{data: data} = [1,2,3,4] |> skip(2) |> run
+    {:ok, %Record{data: data}} = [1,2,3,4] |> skip(2) |> run
     assert data == [3,4]
   end

   test "limit" do
-    %Record{data: data} = [1,2,3,4] |> limit(2) |> run
+    {:ok, %Record{data: data}} = [1,2,3,4] |> limit(2) |> run
     assert data == [1,2]
   end

   test "slice" do
-    %Record{data: data} = [1,2,3,4] |> slice(1,3) |> run
+    {:ok, %Record{data: data}} = [1,2,3,4] |> slice(1,3) |> run
     assert data == [2,3]
   end

   test "nth" do
-    %Record{data: data} = [1,2,3,4] |> nth(2) |> run
+    {:ok, %Record{data: data}} = [1,2,3,4] |> nth(2) |> run
     assert data == 3
   end

   test "offsets_of" do
-    %Record{data: data} = [1,2,3,1,4,1] |> offsets_of(1) |> run
+    {:ok, %Record{data: data}} = [1,2,3,1,4,1] |> offsets_of(1) |> run
     assert data == [0,3,5]
   end

   test "is_empty" do
-    %Record{data: data} = [] |> is_empty |> run
+    {:ok, %Record{data: data}} = [] |> is_empty |> run
     assert data == true
-    %Record{data: data} = [1,2,3,1,4,1] |> is_empty |> run
+    {:ok, %Record{data: data}} = [1,2,3,1,4,1] |> is_empty |> run
     assert data == false
   end

   test "sample" do
-    %Record{data: data} = [1,2,3,1,4,1] |> sample(2) |> run
+    {:ok, %Record{data: data}} = [1,2,3,1,4,1] |> sample(2) |> run
     assert Enum.count(data) == 2
   end
 end
diff --git a/test/query/writing_data_test.exs b/test/query/writing_data_test.exs
index 320c1b9..1a37d3b 100644
--- a/test/query/writing_data_test.exs
+++ b/test/query/writing_data_test.exs
@@ -25,51 +25,74 @@ defmodule WritingDataTest do
   test "insert" do
     table_query = table(@table_name)
     q = insert(table_query, %{name: "Hello", attr: "World"})
-    %Record{data: %{"inserted" => 1, "generated_keys" => [key]}} = run(q)
+    {:ok, %Record{data: %{"inserted" => 1, "generated_keys" => [key]}}} = run(q)

-    %Collection{data: [%{"id" => ^key, "name" => "Hello", "attr" => "World"}]} = run(table_query)
+    {:ok, %Collection{data: [%{"id" => ^key, "name" => "Hello", "attr" => "World"}]}} = run(table_query)
   end

   test "insert multiple" do
     table_query = table(@table_name)
     q = insert(table_query, [%{name: "Hello"}, %{name: "World"}])
-    %Record{data: %{"inserted" => 2}} = run(q)
+    {:ok, %Record{data: %{"inserted" => 2}}} = run(q)

-    %Collection{data: data} = run(table_query)
+    {:ok, %Collection{data: data}} = run(table_query)
     assert Enum.map(data, &(&1["name"])) |> Enum.sort == ["Hello", "World"]
   end

+  test "insert conflict options" do
+    table_query = table(@table_name)
+
+    q = insert(table_query, [%{name: "Hello", value: 1}])
+    {:ok, %Record{data: %{"generated_keys"=> [id], "inserted" => 1}}} = run(q)
+
+    q = insert(table_query, [%{name: "Hello", id: id, value: 2}])
+    {:ok, %Record{data: %{"errors" => 1}}} = run(q)
+
+    q = insert(table_query, [%{name: "World", id: id, value: 2}], %{conflict: "replace"})
+    {:ok, %Record{data: %{"replaced" => 1}}} = run(q)
+    {:ok, %Collection{data: [%{"id" => ^id, "name" => "World", "value" => 2}]}} = run(table_query)
+
+    q = insert(table_query, [%{id: id, value: 3}], %{conflict: "update"})
+    {:ok, %Record{data: %{"replaced" => 1}}} = run(q)
+    {:ok, %Collection{data: [%{"id" => ^id, "name" => "World", "value" => 3}]}} = run(table_query)
+
+    q = insert(table_query, [%{id: id, value: 3}], %{conflict: fn(_id, old, new) ->
+      merge(old, %{value: add(get_field(old, "value"), get_field(new, "value"))}) end})
+    {:ok, %Record{data: %{"replaced" => 1}}} = run(q)
+    {:ok, %Collection{data: [%{"id" => ^id, "name" => "World", "value" => 6}]}} = run(table_query)
+  end
+
   test "update" do
     table_query = table(@table_name)
     q = insert(table_query, %{name: "Hello", attr: "World"})
-    %Record{data: %{"inserted" => 1, "generated_keys" => [key]}} = run(q)
+    {:ok, %Record{data: %{"inserted" => 1, "generated_keys" => [key]}}} = run(q)
     record_query = table_query |> get(key)
     q = record_query |> update(%{name: "Hi"})
     run q
     q = record_query
-    %Record{data: data} = run q
+    {:ok, %Record{data: data}} = run q
     assert data == %{"id" => key, "name" => "Hi", "attr" => "World"}
   end

   test "replace" do
     table_query = table(@table_name)
     q = insert(table_query, %{name: "Hello", attr: "World"})
-    %Record{data: %{"inserted" => 1, "generated_keys" => [key]}} = run(q)
+    {:ok, %Record{data: %{"inserted" => 1, "generated_keys" => [key]}}} = run(q)
     record_query = table_query |> get(key)
     q = record_query |> replace(%{id: key, name: "Hi"})
     run q
     q = record_query
-    %Record{data: data} = run q
+    {:ok, %Record{data: data}} = run q
     assert data == %{"id" => key, "name" => "Hi"}
   end

   test "sync" do
     table_query = table(@table_name)
     q = table_query |> sync
-    %Record{data: data} = run q
+    {:ok, %Record{data: data}} = run q
     assert data == %{"synced" => 1}
   end
diff --git a/test/query_test.exs b/test/query_test.exs
index a8146fe..42f4389 100644
--- a/test/query_test.exs
+++ b/test/query_test.exs
@@ -27,7 +27,7 @@ defmodule QueryTest do
   test "make_array" do
     array = [%{"name" => "hello"}, %{"name:" => "world"}]
     q = Query.make_array(array)
-    %Record{data: data} = run(q)
+    {:ok, %Record{data: data}} = run(q)
     assert Enum.sort(data) == Enum.sort(array)
   end
@@ -36,7 +36,7 @@
     insert(table_query, [%{name: "Hello"}, %{name: "World"}])
       |> run
-    %Collection{data: data} = table(@table_name)
+    {:ok, %Collection{data: data}} = table(@table_name)
       |> map( RethinkDB.Lambda.lambda fn (el) ->
         el[:name] + " " + "with map"
       end) |> run
@@ -48,7 +48,7 @@
     insert(table_query, [%{name: "Hello"}, %{name: "World"}])
       |> run
-    %Collection{data: data} = table(@table_name)
+    {:ok, %Collection{data: data}} = table(@table_name)
       |> filter(%{name: "Hello"})
       |> run
     assert Enum.map(data, &(&1["name"])) == ["Hello"]
@@ -59,7 +59,7 @@
     insert(table_query, [%{name: "Hello"}, %{name: "World"}])
       |> run
-    %Collection{data: data} = table(@table_name)
+    {:ok, %Collection{data: data}} = table(@table_name)
       |> filter(RethinkDB.Lambda.lambda fn (el) ->
         el[:name] == "Hello"
       end)
@@ -73,7 +73,7 @@
         [x, y]
       end)
     end)
-    %{data: data} = run(a)
+    {:ok, %{data: data}} = run(a)
     assert data == [
       [[1,4], [1,5], [1,6]],
       [[2,4], [2,5], [2,6]],