diff --git a/CHANGELOG.md b/CHANGELOG.md index aeaeda6..d6791d2 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -5,6 +5,78 @@ All notable changes to this project will be documented in this file. The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). +## v3.0.0 (2026-03-05) + +### Breaking Changes + +* **Money columns return `%Decimal{}`** instead of `float`. Use `Decimal` + arithmetic for money values. Previously `SELECT CAST(10.50 AS money)` + returned `10.5` (float); now returns `Decimal.new("10.5000")`. + +* **Date/time columns always return Elixir calendar structs.** No more tuple + format. `smalldatetime`, `datetime`, `datetime2` return `%NaiveDateTime{}`; + `datetimeoffset` returns `%DateTime{}`; `date` returns `%Date{}`; + `time` returns `%Time{}`. The `use_elixir_calendar_types` config option + is ignored — struct output is always on. + +* **`Tds.Types.UUID` deprecated.** Use `Ecto.UUID` instead. The new + `Tds.Type.UUID` wire handler performs MSSQL mixed-endian byte reordering + at the protocol level, so `Ecto.UUID` works directly. + +* **`Tds.Types` module removed.** The 1815-line monolithic type module has been + replaced by 12 focused handler modules under `Tds.Type.*`. + +### New Features + +* **`Tds.Type` behaviour** — Pluggable type system with 7 callbacks: + `type_codes/0`, `type_names/0`, `decode_metadata/1`, `decode/2`, + `encode/2`, `param_descriptor/2`, `infer/1`. + +* **12 handler modules** — `Tds.Type.{Boolean, Integer, Float, Decimal, Money, + String, Binary, DateTime, UUID, Xml, Variant, Udt}`. + +* **`Tds.Type.DataReader`** — Shared framing reader with 6 strategies + (`:fixed`, `:bytelen`, `:shortlen`, `:longlen`, `:plp`, `:variant`). + PLP uses iolist accumulation (8x faster than binary concat). + All strategies sever sub-binary references via `:binary.copy/1` to + prevent memory leaks. 
+ +* **`Tds.Type.Registry`** — Per-connection type registry mapping TDS type + codes and atom names to handlers. Supports user-provided `extra_types` + that override built-in handlers. + +* **`extra_types` connection option** — Register custom type handlers at + connect time: `Tds.start_link(extra_types: [MyApp.GeographyType])`. + +### Performance + +Benchmarked on Apple M4 Pro, 48 GB, Elixir 1.18.1, Erlang/OTP 27.2: + +* Integer decode: **54% faster** (7.12M → 10.96M ips) +* Decimal encode (1000 params): **8.5x faster** (0.63K → 5.39K ips) +* Decimal encode memory: **8.8x less** (1.99 MB → 226 KB per 1000 params) +* PLP reassembly: **8x faster** at 1 MB, **7x faster** at 10 MB +* PLP memory: **~2x less** (iolist vs binary concat) + +### Improvements + +* Decimal encoding no longer mutates `Decimal.Context` in the process + dictionary. Precision and scale are passed via metadata. +* All decoded values sever sub-binary references to the TCP packet buffer, + preventing memory retention when values are stored in ETS or GenServer state. + +### Migration Guide + +1. **Money values**: Replace float arithmetic with `Decimal` operations. + `Decimal.to_float/1` is available if float is needed temporarily. +2. **Date/time tuples**: Replace `{{y,m,d},{h,min,s}}` with + `~N[2013-10-12 00:37:14]` or `NaiveDateTime.new!/3`. For encoding, + convert tuples to calendar structs before passing as parameters. +3. **Remove `use_elixir_calendar_types`**: Delete from your config — + calendar structs are now the only output format. +4. **`Tds.Types.UUID`**: Replace with `Ecto.UUID`. If you called + `Tds.generate_uuid/0`, use `Ecto.UUID.bingenerate/0` instead. 
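+The migration steps above can be sketched end-to-end. This is an illustrative,
+hypothetical snippet; `conn` is a placeholder connection and the literal
+values are invented:
+
+```elixir
+# 1. Money: Decimal in, Decimal out (v2.x returned a float here).
+{:ok, %Tds.Result{rows: [[amount]]}} =
+  Tds.query(conn, "SELECT CAST(10.50 AS money)", [])
+
+total = Decimal.mult(amount, Decimal.new(3))
+# Decimal.to_float(total) only as a temporary escape hatch.
+
+# 2. Date/time: build calendar structs instead of tuples.
+{{y, m, d}, {h, min, s}} = {{2013, 10, 12}, {0, 37, 14}}
+naive = NaiveDateTime.new!(Date.new!(y, m, d), Time.new!(h, min, s))
+
+Tds.query(conn, "SELECT @1", [
+  %Tds.Parameter{name: "@1", value: naive, type: :datetime2}
+])
+
+# 4. UUIDs: Ecto.UUID replaces Tds.Types.UUID.
+uuid = Ecto.UUID.bingenerate()
+```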
+ ## v2.3.5 (2024-01-23) ### Improvements * Removed unnecessary append of possibly large binaries for floats @@ -324,11 +396,7 @@ could not determine if binary is of uuid type, it interpreted such values as raw ### Enhancements * Added API for ATTN call -<<<<<<< HEAD -## v0.1.5 -======= ## v0.1.5 - 2015-02-19 ->>>>>>> 1007dc1 (Misc doc changes) ### Bug Fixes * Fixed issue where driver would not call Connection.next when setting the state to :ready * Fixed UCS2 Encoding diff --git a/README.md b/README.md index 465d5c9..a9fe420 100644 --- a/README.md +++ b/README.md @@ -8,8 +8,6 @@ MSSQL / TDS Database driver for Elixir. ### NOTE: Since TDS version 2.0, `tds_ecto` package is deprecated, this version supports `ecto_sql` since version 3.3.4. -Please check out the issues for a more complete overview. This branch should not be considered stable or ready for production yet. - For stable versions always use [hex.pm](https://hex.pm/packages/tds) as source for your mix.exs. ## Usage @@ -19,7 +17,7 @@ Add `:tds` as a dependency in your `mix.exs` file. ```elixir def deps do [ - {:tds, "~> 2.3"} + {:tds, "~> 3.0"} ] end ``` @@ -151,52 +149,77 @@ This functionality requires specific environment to be developed. 
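A minimal end-to-end check after upgrading; hostname and credentials below are
placeholders for your own environment:

```elixir
{:ok, pid} =
  Tds.start_link(
    hostname: "localhost",
    username: "sa",
    password: "<password>",
    database: "test"
  )

{:ok, %Tds.Result{rows: [[n]]}} = Tds.query(pid, "SELECT 1", [])
# n is the integer 1
```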
## Data representation -| TDS | Elixir | -| ----------------- | ------------------------------------------------------------------------------------------ | -| NULL | nil | -| bool | true / false | -| char | "é" | -| int | 42 | -| float | 42.0 | -| text | "text" | -| binary | <<42>> | -| numeric | #Decimal<42.0> | -| date | {2013, 10, 12} or %Date{} | -| time | {0, 37, 14} or {0, 37, 14, 123456} or %Time{} | -| smalldatetime | {{2013, 10, 12}, {0, 37, 14}} or {{2013, 10, 12}, {0, 37, 14, 123456}} | -| datetime | {{2013, 10, 12}, {0, 37, 14}} or {{2013, 10, 12}, {0, 37, 14, 123456}} or %NaiveDateTime{} | -| datetime2 | {{2013, 10, 12}, {0, 37, 14}} or {{2013, 10, 12}, {0, 37, 14, 123456}} or %NaiveDateTime{} | -| datetimeoffset(n) | {{2013, 10, 12}, {0, 37, 14}} or {{2013, 10, 12}, {0, 37, 14, 123456}} or %DateTime{} | -| uuid | <<160,238,188,153,156,11,78,248,187,109,107,185,189,56,10,17>> | - -Currently unsupported: [User-Defined Types](https://docs.microsoft.com/en-us/sql/relational-databases/clr-integration-database-objects-user-defined-types/working-with-user-defined-types-in-sql-server), XML +| TDS | Elixir | +| ----------------- | ---------------------- | +| NULL | `nil` | +| bool | `true` / `false` | +| char / varchar | `"text"` | +| nchar / nvarchar | `"text"` | +| int / bigint | `42` | +| float / real | `42.0` | +| text / ntext | `"text"` | +| binary / varbinary | `<<42>>` | +| numeric / decimal | `#Decimal<42.0>` | +| money / smallmoney | `#Decimal<10.5000>` | +| date | `%Date{}` | +| time | `%Time{}` | +| smalldatetime | `%NaiveDateTime{}` | +| datetime | `%NaiveDateTime{}` | +| datetime2 | `%NaiveDateTime{}` | +| datetimeoffset(n) | `%DateTime{}` | +| uniqueidentifier | `<<_::128>>` | +| xml | `"..."` | +| sql_variant | varies by inner type | + +User-Defined Types (UDT) are returned as raw binary by default. Register custom +handlers via `extra_types` to decode specific UDTs (see below). 
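+To illustrate the table, a hypothetical round-trip (`conn` is a placeholder
+connection; the comparison assumes Decimal 2.x, where `Decimal.compare/2`
+returns an atom):
+
+```elixir
+{:ok, %Tds.Result{rows: [[d, n, m]]}} =
+  Tds.query(
+    conn,
+    "SELECT CAST('2013-10-12' AS date), CAST(42 AS int), CAST(10.50 AS money)",
+    []
+  )
+
+%Date{year: 2013, month: 10, day: 12} = d
+42 = n
+:eq = Decimal.compare(m, Decimal.new("10.5"))
+```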
### Dates and Times -Tds can work with dates and times in either a tuple format or as Elixir calendar types. Calendar types can be enabled in the config with `config :tds, opts: [use_elixir_calendar_types: true]`. - -**Tuple forms:** +As of v3.0, all date/time columns are decoded as Elixir calendar structs: -- Date: `{yr, mth, day}` -- Time: `{hr, min, sec}` or `{hr, min, sec, fractional_seconds}` -- DateTime: `{date, time}` -- DateTimeOffset: `{utc_date, utc_time, offset_mins}` +| SQL Type | Elixir Type | +| ------------------- | -------------------- | +| `date` | `%Date{}` | +| `time(n)` | `%Time{}` | +| `smalldatetime` | `%NaiveDateTime{}` | +| `datetime` | `%NaiveDateTime{}` | +| `datetime2(n)` | `%NaiveDateTime{}` | +| `datetimeoffset(n)` | `%DateTime{}` | -In SQL Server, the `fractional_seconds` of a `time`, `datetime2` or `datetimeoffset(n)` column can have a precision of 0-7, where the `microsecond` field of a `%Time{}` or `%DateTime{}` struct can have a precision of 0-6. +SQL Server `time`, `datetime2`, and `datetimeoffset` support precision 0-7. +Elixir's `microsecond` field supports precision 0-6, so fractional seconds +are truncated to microsecond precision when the SQL scale exceeds 6. -Note that the DateTimeOffset tuple expects the date and time in UTC and the offset in minutes. For example, `{{2020, 4, 5}, {5, 30, 59}, 600}` is equal to `'2020-04-05 15:30:59+10:00'`. +The `use_elixir_calendar_types` config option from v2.x is no longer needed +and is ignored in v3.0. ### UUIDs [MSSQL stores UUIDs in mixed-endian -format](https://dba.stackexchange.com/a/121878), and these mixed-endian UUIDs -are returned in [Tds.Result](https://hexdocs.pm/tds/Tds.Result.html). +format](https://dba.stackexchange.com/a/121878) where the first three groups +are byte-reversed (little-endian) and the last two are big-endian. 
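+The reordering can be illustrated with a small sketch (not the library's
+internal implementation): reading the first three groups as little-endian
+and re-emitting them big-endian reverses their bytes, while the last eight
+bytes pass through unchanged.
+
+```elixir
+defmodule MixedEndianExample do
+  # Convert an MSSQL mixed-endian uniqueidentifier binary to the
+  # big-endian (RFC 4122) byte order. Illustrative only.
+  def to_big_endian(
+        <<a::little-32, b::little-16, c::little-16, rest::binary-size(8)>>
+      ) do
+    <<a::big-32, b::big-16, c::big-16, rest::binary>>
+  end
+end
+```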
+ +As of v3.0, the `Tds.Type.UUID` wire handler performs this byte reordering +automatically at the protocol level, so `Ecto.UUID` works directly without +any wrapper module. + +`Tds.Types.UUID` is deprecated. Use `Ecto.UUID` for all UUID operations. + +### Custom Type Handlers -To convert a mixed-endian UUID binary to a big-endian string, use -[Tds.Types.UUID.load/1](https://hexdocs.pm/tds/Tds.Types.UUID.html#load/1) +Register custom type handlers via the `extra_types` connection option: + +```elixir +Tds.start_link( + hostname: "localhost", + extra_types: [MyApp.GeographyType] +) +``` -To convert a big-endian UUID string to a mixed-endian binary, use -[Tds.Types.UUID.dump/1](https://hexdocs.pm/tds/Tds.Types.UUID.html#dump/1) +Custom handlers implement the `Tds.Type` behaviour and can override built-in +handlers for the same type codes or names. See `Tds.Type` docs for the +callback specification. ## Contributing @@ -216,7 +239,7 @@ use it for the first time. The tests require an SQL Server database to be available on localhost. If you are not using Windows OS you can start sql server instance using Docker. -Official SQL Server Docker image can be found [here](https://hub.docker.com/r/microsoft/mssql-server-linux). +Official SQL Server Docker image can be found [here](https://hub.docker.com/r/microsoft/mssql-server). If you do not have specific requirements on how you would like to start sql server in docker, you can use script for this repo. diff --git a/bench/plp_bench.exs b/bench/plp_bench.exs new file mode 100644 index 0000000..5106ac0 --- /dev/null +++ b/bench/plp_bench.exs @@ -0,0 +1,83 @@ +# PLP chunk reassembly: current binary concat vs iolist accumulation. 
+# Run: mix run bench/plp_bench.exs
+#
+# Baseline results (2026-03-05)
+# Machine: Apple M4 Pro, 48 GB, macOS
+# Elixir 1.18.1, Erlang/OTP 27.2, JIT enabled
+#
+# Name                       ips      average    deviation      median      99th %
+# 1MB new (iolist)       43.15 K     23.17 us      ±10.68%    23.38 us    29.21 us
+# 1MB current (concat)    5.38 K    186.02 us      ±21.73%   183.63 us   291.98 us
+# 10MB new (iolist)       3.44 K    290.93 us      ±21.21%   280.71 us   730.41 us
+# 10MB current (concat)   0.49 K   2026.15 us       ±6.40%  2018.46 us  2400.42 us
+#
+# Summary: iolist is 8x faster at 1MB and 7x faster at 10MB
+#
+# Memory usage:
+# 1MB new (iolist)        18.13 KB
+# 1MB current (concat)    35.44 KB - 1.96x
+# 10MB new (iolist)      170.70 KB
+# 10MB current (concat)  357.38 KB - 2.10x
+
+defmodule PLPBench do
+  # Current approach: buf <> :binary.copy(chunk)
+  def decode_plp_current(<<0::little-unsigned-32, _rest::binary>>, buf),
+    do: buf
+
+  def decode_plp_current(
+        <<size::little-unsigned-32, chunk::binary-size(size), rest::binary>>,
+        buf
+      ) do
+    decode_plp_current(rest, buf <> :binary.copy(chunk))
+  end
+
+  # New approach: iolist accumulation
+  def decode_plp_iolist(<<0::little-unsigned-32, _rest::binary>>, acc),
+    do: :lists.reverse(acc) |> IO.iodata_to_binary()
+
+  def decode_plp_iolist(
+        <<size::little-unsigned-32, chunk::binary-size(size), rest::binary>>,
+        acc
+      ) do
+    decode_plp_iolist(rest, [chunk | acc])
+  end
+
+  def build_plp_payload(total_size, chunk_size) do
+    chunk = :crypto.strong_rand_bytes(chunk_size)
+    num_chunks = div(total_size, chunk_size)
+
+    chunks =
+      for _ <- 1..num_chunks, into: <<>> do
+        <<chunk_size::little-unsigned-32>> <> chunk
+      end
+
+    chunks <> <<0::little-unsigned-32>>
+  end
+end
+
+payload_1mb = PLPBench.build_plp_payload(1_048_576, 4096)
+payload_10mb = PLPBench.build_plp_payload(10_485_760, 4096)
+
+Benchee.run(
+  %{
+    "1MB current (concat)" => fn ->
+      PLPBench.decode_plp_current(payload_1mb, <<>>)
+    end,
+    "1MB new (iolist)" => fn ->
+      PLPBench.decode_plp_iolist(payload_1mb, [])
+    end,
+    "10MB current (concat)" => fn ->
+      PLPBench.decode_plp_current(payload_10mb, <<>>)
+    end,
+    "10MB new (iolist)" => fn ->
+      PLPBench.decode_plp_iolist(payload_10mb, [])
+    end
+  },
+  warmup: 1,
+  time: 5,
+  memory_time: 2
+)
diff --git a/bench/type_system_bench.exs b/bench/type_system_bench.exs
new file mode 100644
index 0000000..61a3be9
--- /dev/null
+++ b/bench/type_system_bench.exs
@@ -0,0 +1,83 @@
+# Benchmarks for type system decode/encode throughput.
+# Run: mix run bench/type_system_bench.exs
+#
+# Requires benchee: add {:benchee, "~> 1.3", only: :dev, runtime: false}
+# to mix.exs deps if not present.
+
+alias Tds.Parameter
+alias Tds.Type.{DataReader, Registry}
+
+defmodule TypeBench.Fixtures do
+  import Tds.Protocol.Constants
+
+  @registry Tds.Type.Registry.new()
+
+  def registry, do: @registry
+
+  def integer_decode_input do
+    {:ok, handler} =
+      Registry.handler_for_code(@registry, tds_type(:int))
+
+    meta = %{data_reader: {:fixed, 4}, handler: handler}
+    data = <<42, 0, 0, 0, 0xFF>>
+    {meta, data}
+  end
+
+  def string_decode_input do
+    value = String.duplicate("hello", 100)
+    ucs2 = Tds.Encoding.UCS2.from_string(value)
+    size = byte_size(ucs2)
+
+    {:ok, handler} =
+      Registry.handler_for_code(@registry, tds_type(:nvarchar))
+
+    meta = %{
+      data_reader: :shortlen,
+      collation: %Tds.Protocol.Collation{codepage: :RAW},
+      encoding: :ucs2,
+      length: size,
+      handler: handler
+    }
+
+    data = <<size::little-unsigned-16>> <> ucs2 <> <<0xFF>>
+    {meta, data}
+  end
+
+  def decimal_encode_params do
+    for _ <- 1..1000 do
+      %Parameter{
+        name: "@1",
+        value: Decimal.new("12345.6789"),
+        type: :decimal
+      }
+    end
+  end
+end
+
+{int_meta, int_data} = TypeBench.Fixtures.integer_decode_input()
+{str_meta, str_data} = TypeBench.Fixtures.string_decode_input()
+params = TypeBench.Fixtures.decimal_encode_params()
+registry = TypeBench.Fixtures.registry()
+
+Benchee.run(
+  %{
+    "decode integer" => fn ->
+      {raw, _rest} = DataReader.read(int_meta.data_reader, int_data)
+      int_meta.handler.decode(raw, int_meta)
+    end,
+    "decode string" => fn ->
+      {raw, _rest} = DataReader.read(str_meta.data_reader, str_data)
+      str_meta.handler.decode(raw, str_meta)
+    end,
+    "encode 1000 decimal params" =>
fn -> + Enum.each(params, fn p -> + {:ok, handler} = Registry.handler_for_name(registry, p.type) + meta = %{type: p.type} + handler.encode(p.value, meta) + end) + end + }, + warmup: 2, + time: 5, + memory_time: 2 +) diff --git a/lib/tds.ex b/lib/tds.ex index caa0cc1..8db9768 100644 --- a/lib/tds.ex +++ b/lib/tds.ex @@ -16,7 +16,6 @@ defmodule Tds do Please consult with [configuration](readme.html#configuration) how to do this. """ alias Tds.Query - alias Tds.Types.UUID @timeout 5000 @execution_mode :prepare_execute @@ -202,22 +201,28 @@ defmodule Tds do Application.fetch_env!(:tds, :json_library) end + @deprecated "Use Ecto.UUID instead" @doc """ Generates a version 4 (random) UUID in the MS uniqueidentifier binary format. """ @spec generate_uuid :: <<_::128>> - def generate_uuid, do: UUID.bingenerate() + # credo:disable-for-next-line Credo.Check.Refactor.Apply + def generate_uuid, do: apply(Tds.Types.UUID, :bingenerate, []) + @deprecated "Use Ecto.UUID instead" @doc """ Decodes MS uniqueidentifier binary to its string representation. """ - def decode_uuid(uuid), do: UUID.load(uuid) + # credo:disable-for-next-line Credo.Check.Refactor.Apply + def decode_uuid(uuid), do: apply(Tds.Types.UUID, :load, [uuid]) + @deprecated "Use Ecto.UUID instead" @doc """ Same as `decode_uuid/1` but raises `ArgumentError` if value is invalid. """ def decode_uuid!(uuid) do - case UUID.load(uuid) do + # credo:disable-for-next-line Credo.Check.Refactor.Apply + case apply(Tds.Types.UUID, :load, [uuid]) do {:ok, value} -> value @@ -226,15 +231,19 @@ defmodule Tds do end end + @deprecated "Use Ecto.UUID instead" @doc """ Encodes UUID string into MS uniqueidentifier binary. 
""" @spec encode_uuid(any) :: :error | {:ok, <<_::128>>} - def encode_uuid(value), do: UUID.dump(value) + # credo:disable-for-next-line Credo.Check.Refactor.Apply + def encode_uuid(value), do: apply(Tds.Types.UUID, :dump, [value]) + @deprecated "Use Ecto.UUID instead" @doc """ Same as `encode_uuid/1` but raises `ArgumentError` if value is invalid. """ @spec encode_uuid!(any) :: <<_::128>> - def encode_uuid!(value), do: UUID.dump!(value) + # credo:disable-for-next-line Credo.Check.Refactor.Apply + def encode_uuid!(value), do: apply(Tds.Types.UUID, :dump!, [value]) end diff --git a/lib/tds/messages.ex b/lib/tds/messages.ex index 5294e80..a591f84 100644 --- a/lib/tds/messages.ex +++ b/lib/tds/messages.ex @@ -3,11 +3,12 @@ defmodule Tds.Messages do import Record, only: [defrecord: 2] import Tds.Tokens, only: [decode_tokens: 1] + import Tds.Protocol.Constants alias Tds.Encoding.UCS2 alias Tds.Parameter - alias Tds.Protocol.{Login7, Prelogin} - alias Tds.Types + alias Tds.Protocol.{Login7, Packet, Prelogin} + alias Tds.Type.Registry require Bitwise require Logger @@ -47,21 +48,7 @@ defmodule Tds.Messages do # @tds_sp_prepexecrpc 14 @tds_sp_unprepare 15 - ## Packet Size - @tds_pack_data_size 4088 - # @tds_pack_header_size 8 - # @tds_pack_size @tds_pack_header_size + @tds_pack_data_size - - ## Packet Types - # @tds_pack_sqlbatch 1 - # @tds_pack_rpcRequest 3 - @tds_pack_cancel 6 - # @tds_pack_bulkloadbcp 7 - # @tds_pack_transmgrreq 14 - # @tds_pack_normal 15 - # @tds_pack_login7 16 - # @tds_pack_sspimessage 17 - # @tds_pack_prelogin 18 + # Packet sizes and types are sourced from Tds.Protocol.Constants via import. 
## Parsers def parse(:prelogin, packet_data, s) do @@ -254,7 +241,7 @@ defmodule Tds.Messages do end defp encode(msg_attn(), _s) do - encode_packets(@tds_pack_cancel, <<>>) + Packet.encode(packet_type(:attention), <<>>) end defp encode(msg_sql(query: q), %{trans: trans}) do @@ -277,7 +264,7 @@ defmodule Tds.Messages do total_length = byte_size(headers) + 4 all_headers = <> <> headers data = all_headers <> q_ucs - encode_packets(0x01, data) + Packet.encode(packet_type(:sql_batch), data) end defp encode(msg_rpc(proc: proc, params: params), %{trans: trans}) do @@ -299,7 +286,7 @@ defmodule Tds.Messages do data = all_headers <> encode_rpc(proc, params) # layout Data - encode_packets(0x03, data) + Packet.encode(packet_type(:rpc), data) end defp encode(msg_transmgr(command: "TM_BEGIN_XACT", isolation_level: isolation_level), %{ @@ -356,7 +343,7 @@ defmodule Tds.Messages do data = all_headers <> <> - encode_packets(0x0E, data) + Packet.encode(packet_type(:transaction_manager), data) end defp encode_rpc(:sp_executesql, params) do @@ -412,38 +399,57 @@ defmodule Tds.Messages do defp encode_rpc_param(%Tds.Parameter{name: name} = param) do p_name = UCS2.from_string(name) p_flags = param |> Parameter.option_flags() - {type_code, type_data, type_attr} = Types.encode_data_type(param) - p_meta_data = <> <> p_name <> p_flags <> type_data + {_type_code, meta_bin, value_bin} = + encode_via_handler(param) - p_meta_data <> Types.encode_data(type_code, param.value, type_attr) + IO.iodata_to_binary([ + <>, + p_name, + p_flags, + meta_bin, + value_bin + ]) end - def encode_header(type, data, id, status) do - length = byte_size(data) + 8 - # id::unsigned-size(8) below basicaly deals overflow e.g. 
rem(id, 255) - <> + @default_registry Registry.new() + + defp encode_via_handler(%Tds.Parameter{ + type: type, + value: value + }) + when not is_nil(type) do + handler = resolve_handler(@default_registry, type) + meta = handler_metadata(handler, value, type) + handler.encode(value, meta) end - @spec encode_packets(integer, binary, non_neg_integer) :: [binary, ...] - def encode_packets(type, binary, id \\ 1) + defp encode_via_handler(%Tds.Parameter{value: value}) do + {:ok, handler, meta} = + Registry.infer(@default_registry, value) - def encode_packets(_type, <<>>, _) do - [] + handler.encode(value, meta) end - def encode_packets( - type, - <>, - id - ) do - status = if byte_size(tail) > 0, do: 0, else: 1 - packet = [encode_header(type, data, id, status), data] - [packet | encode_packets(type, tail, id + 1)] + defp resolve_handler(registry, type) do + case Registry.handler_for_name(registry, type) do + {:ok, handler} -> + handler + + :error -> + # Unknown types (e.g. {:array, :string}) default + # to string encoding, matching legacy behavior. 
+ {:ok, handler} = + Registry.handler_for_name(registry, :string) + + handler + end end - def encode_packets(type, data, id) do - header = encode_header(type, data, id, 1) - [header <> data] + defp handler_metadata(handler, value, type) do + case handler.infer(value) do + {:ok, meta} -> Map.put(meta, :type, type) + :skip -> %{type: type} + end + end end diff --git a/lib/tds/parameter.ex b/lib/tds/parameter.ex index 63ce1b1..c72f97f 100644 --- a/lib/tds/parameter.ex +++ b/lib/tds/parameter.ex @@ -1,7 +1,7 @@ defmodule Tds.Parameter do @moduledoc false - alias Tds.Types + alias Tds.Type.Registry @type t :: %__MODULE__{ name: String.t() | nil, @@ -33,20 +33,20 @@ <<0::size(6), fDefaultValue::size(1), fByRefValue::size(1)>> end - def prepared_params(params) do + def prepared_params(params, registry \\ nil) do + reg = registry || default_registry() + params |> List.wrap() |> name(0) |> Enum.map_join(", ", fn param -> - param - |> fix_data_type() - |> Types.encode_param_descriptor() + param_descriptor(param, reg) end) end @doc """ - Prepares parameters by giving them names, define missing type, encoding value - if necessary. + Prepares parameters by giving them names, defining missing types, + and encoding values if necessary.
""" def prepare_params(params) do params @@ -64,9 +64,14 @@ defmodule Tds.Parameter do param = case param do - %__MODULE__{name: nil} -> fix_data_type(%{param | name: "@#{name}"}) - %__MODULE__{} -> fix_data_type(param) - raw_param -> fix_data_type(raw_param, name) + %__MODULE__{name: nil} -> + fix_data_type(%{param | name: "@#{name}"}) + + %__MODULE__{} -> + fix_data_type(param) + + raw_param -> + fix_data_type(raw_param, name) end do_name(tail, name, [param | acc]) @@ -82,8 +87,6 @@ defmodule Tds.Parameter do end def fix_data_type(%__MODULE__{type: nil, value: nil} = param) do - # should fix Ecto has_one, on_change :nullify issue where type is not known when Ecto - # builds query/statement for on_change callback %{param | type: :binary} end @@ -108,11 +111,6 @@ defmodule Tds.Parameter do def fix_data_type(%__MODULE__{value: value} = param) when is_integer(value) do - # if -2_147_483_648 >= value and value <= 2_147_483_647 do - # %{param | type: :integer} - # else - # %{param | type: :bigint} - # end %{param | type: :integer} end @@ -145,14 +143,15 @@ defmodule Tds.Parameter do %{param | type: :datetime} end - def fix_data_type(%__MODULE__{value: %NaiveDateTime{microsecond: {_, s}}} = param) do + def fix_data_type( + %__MODULE__{value: %NaiveDateTime{microsecond: {_, s}}} = + param + ) do type = if s > 3, do: :datetime2, else: :datetime %{param | type: type} end def fix_data_type(%__MODULE__{value: {{_, _, _}, {_, _, _, fsec}}} = param) do - # todo: enable warning and introduce Tds.Types.DateTime2 and Tds.Types.DateTime - # Logger.warn(fn -> "Datetime as tuple is obsolete, please use NaiveDateTime." end) type = if rem(fsec, 1000) > 0, do: :datetime2, else: :datetime %{param | type: type} end @@ -179,4 +178,54 @@ defmodule Tds.Parameter do def fix_data_type(raw_param, acc) do fix_data_type(%__MODULE__{name: "@#{acc}", value: raw_param}) end + + @doc """ + Generates a SQL parameter descriptor for a single parameter. 
+ + Returns a string like `"@name int"` or `"@name nvarchar(2000)"`. + """ + def encode_param_descriptor(%__MODULE__{} = param) do + param_descriptor(param, default_registry()) + end + + defp param_descriptor( + %__MODULE__{name: name, type: type, value: value}, + registry + ) + when not is_nil(type) do + handler = resolve_handler(registry, type) + meta = infer_metadata(handler, value, type) + desc = handler.param_descriptor(value, meta) + "#{name} #{desc}" + end + + defp param_descriptor(%__MODULE__{} = param, registry) do + param + |> fix_data_type() + |> param_descriptor(registry) + end + + defp resolve_handler(registry, type) do + case Registry.handler_for_name(registry, type) do + {:ok, handler} -> + handler + + :error -> + {:ok, handler} = + Registry.handler_for_name(registry, :string) + + handler + end + end + + defp infer_metadata(handler, value, type) do + case handler.infer(value) do + {:ok, meta} -> Map.put(meta, :type, type) + :skip -> %{type: type} + end + end + + defp default_registry do + Registry.new() + end end diff --git a/lib/tds/protocol.ex b/lib/tds/protocol.ex index 2fc918e..2d0e56b 100644 --- a/lib/tds/protocol.ex +++ b/lib/tds/protocol.ex @@ -3,7 +3,9 @@ defmodule Tds.Protocol do Implements DBConnection behaviour for TDS protocol. 
""" alias Tds.{Parameter, Query} - import Tds.{BinaryUtils, Messages, Utils} + alias Tds.Protocol.Packet + alias Tds.Type.Registry + import Tds.{Messages, Utils} require Logger use DBConnection @@ -42,7 +44,8 @@ defmodule Tds.Protocol do result: nil | list(), query: nil | String.t(), transaction: transaction, - env: env + env: env, + registry: Registry.t() } defstruct sock: nil, @@ -59,7 +62,8 @@ defmodule Tds.Protocol do savepoint: 0, collation: %Tds.Protocol.Collation{}, packetsize: 4096 - } + }, + registry: nil @spec connect(opts :: Keyword.t()) :: {:ok, state :: t()} | {:error, Exception.t()} def connect(opts) do @@ -71,7 +75,8 @@ defmodule Tds.Protocol do |> Keyword.put_new(:hostname, System.get_env("MSSQLHOST") || "localhost") |> Enum.reject(fn {_k, v} -> is_nil(v) end) - s = %__MODULE__{} + registry = Registry.new(opts[:extra_types] || []) + s = %__MODULE__{registry: registry} case opts[:instance] do nil -> @@ -201,7 +206,7 @@ defmodule Tds.Protocol do :prepare_execute -> params = opts[:parameters] - |> Parameter.prepared_params() + |> Parameter.prepared_params(s.registry) send_prepare(statement, params, %{s | state: :prepare}) @@ -609,7 +614,7 @@ defmodule Tds.Protocol do defp send_param_query( %Query{handle: handle, statement: statement} = _, params, - %{transaction: :started} = s + %{transaction: :started, registry: registry} = s ) do msg = case handle do @@ -625,7 +630,7 @@ defmodule Tds.Protocol do name: "@params", type: :string, direction: :input, - value: Parameter.prepared_params(params) + value: Parameter.prepared_params(params, registry) } | Parameter.prepare_params(params) ] @@ -661,7 +666,7 @@ defmodule Tds.Protocol do defp send_param_query( %Query{handle: handle, statement: statement} = _, params, - s + %{registry: registry} = s ) do msg = case handle do @@ -677,7 +682,7 @@ defmodule Tds.Protocol do name: "@params", type: :string, direction: :input, - value: Parameter.prepared_params(params) + value: Parameter.prepared_params(params, registry) } 
| Parameter.prepare_params(params) ] @@ -818,14 +823,15 @@ defmodule Tds.Protocol do mod.send(sock, pak) end) - case msg_recv(s) do - {:disconnect, ex, s} -> - {:disconnect, ex, %{s | opts: clean_opts(opts)}} + case Packet.reassemble(s.sock) do + {:ok, _type, payload} -> + decode(payload, %{s | state: :login}) - buffer -> - buffer - |> IO.iodata_to_binary() - |> decode(%{s | state: :login}) + {:error, reason} -> + {:disconnect, + %Tds.Error{ + message: "Login failed: #{inspect(reason)}" + }, %{s | opts: clean_opts(opts)}} end end @@ -850,95 +856,15 @@ defmodule Tds.Protocol do end) with :ok <- send_result, - buffer when is_list(buffer) <- msg_recv(s) do - buffer - |> IO.iodata_to_binary() - |> decode(s) - end - end - - defp msg_recv(%{sock: {mod, pid}} = s) do - case mod.recv(pid, 0) do - {:ok, pkg} -> - pkg - |> next_tds_pkg([]) - |> msg_recv(s) - - {:error, error} -> + {:ok, _type, payload} <- Packet.reassemble(s.sock) do + decode(payload, s) + else + {:error, reason} -> {:disconnect, %Tds.Error{ - message: "Connection failed to receive packet due #{inspect(error)}" + message: "Failed to receive packet: #{inspect(reason)}" }, s} end - catch - {:error, error} -> {:disconnect, error, s} - end - - defp msg_recv({:done, buffer, _}, _s) do - Enum.reverse(buffer) - end - - defp msg_recv({:more, buffer, more, last?}, %{sock: {mod, pid}} = s) do - take = if last?, do: more, else: 0 - - case mod.recv(pid, take) do - {:ok, pkg} -> - next_tds_pkg(pkg, buffer, more, last?) 
- |> msg_recv(s) - - {:error, error} -> - throw({:error, error}) - end - end - - defp msg_recv({:more, buffer, unknown_pkg}, %{sock: {mod, pid}} = s) do - case mod.recv(pid, 0) do - {:ok, pkg} -> - unknown_pkg - |> Kernel.<>(pkg) - |> next_tds_pkg(buffer) - |> msg_recv(s) - - {:error, error} -> - throw({:error, error}) - end - end - - defp next_tds_pkg(pkg, buffer) do - case pkg do - <<0x04, 0x01, size::int16(), _::int32(), chunk::binary>> -> - more = size - 8 - next_tds_pkg(chunk, buffer, more, true) - - <<0x04, 0x00, size::int16(), _::int32(), chunk::binary>> -> - more = size - 8 - next_tds_pkg(chunk, buffer, more, false) - - unknown_pkg -> - {:more, buffer, unknown_pkg} - end - end - - defp next_tds_pkg(pkg, buffer, more, true) do - case pkg do - <> -> - {:done, [chunk | buffer], tail} - - <> -> - more = more - byte_size(chunk) - {:more, [chunk | buffer], more, true} - end - end - - defp next_tds_pkg(pkg, buffer, more, false) do - case pkg do - <> -> - next_tds_pkg(tail, [chunk | buffer]) - - <> -> - more = more - byte_size(chunk) - {:more, [chunk | buffer], more, false} - end end defp clean_opts(opts) do diff --git a/lib/tds/protocol/binary.ex b/lib/tds/protocol/binary.ex new file mode 100644 index 0000000..99b20d2 --- /dev/null +++ b/lib/tds/protocol/binary.ex @@ -0,0 +1,239 @@ +defmodule Tds.Protocol.Binary do + @moduledoc """ + Unified binary macros for TDS protocol encoding and decoding. + + Consolidates macros from `Tds.BinaryUtils` (little-endian, used by most + modules) and `Tds.Protocol.Grammar` (big-endian + parameterized, used by + prelogin and collation). 
+
+  ## Byte Order Convention
+
+  Multi-byte integer macros default to **little-endian** (the standard for
+  TDS data fields) and accept an optional `:big` or `:little` atom argument:
+
+      <<value::ushort()>>        # little-endian (default)
+      <<value::ushort(:little)>> # explicit little-endian
+      <<value::ushort(:big)>>    # big-endian (for prelogin headers)
+
+  ## Parameterized Macros
+
+  `bit/1`, `byte/1`, `uchar/1`, `unicodechar/1`, `bigbinary/1` accept
+  a size parameter and are used for structures like collation bitfields.
+  """
+
+  # ===========================================================================
+  # Unsigned integers — with optional endianness argument
+  # ===========================================================================
+
+  @doc "An unsigned single byte (8-bit) value. Range: 0..255."
+  defmacro byte, do: quote(do: unsigned - 8)
+
+  @doc """
+  An unsigned 2-byte (16-bit) value. Range: 0..65535.
+
+  Defaults to little-endian. Pass `:big` for big-endian.
+  """
+  defmacro ushort(endian \\ :little)
+  defmacro ushort(:little), do: quote(do: little - unsigned - 16)
+  defmacro ushort(:big), do: quote(do: unsigned - 16)
+
+  @doc """
+  An unsigned 4-byte (32-bit) value. Range: 0..(2^32)-1.
+
+  Defaults to little-endian. Pass `:big` for big-endian.
+  """
+  defmacro ulong(endian \\ :little)
+  defmacro ulong(:little), do: quote(do: little - unsigned - 32)
+  defmacro ulong(:big), do: quote(do: unsigned - 32)
+
+  @doc """
+  An unsigned 4-byte (32-bit) value. Alias for `ulong`.
+
+  Defaults to little-endian. Pass `:big` for big-endian.
+  """
+  defmacro dword(endian \\ :little)
+  defmacro dword(:little), do: quote(do: little - unsigned - 32)
+  defmacro dword(:big), do: quote(do: unsigned - 32)
+
+  @doc """
+  An unsigned 8-byte (64-bit) value. Range: 0..(2^64)-1.
+
+  Defaults to little-endian. Pass `:big` for big-endian.
+ """ + defmacro ulonglong(endian \\ :little) + defmacro ulonglong(:little), do: quote(do: little - unsigned - 64) + defmacro ulonglong(:big), do: quote(do: unsigned - 64) + + @doc "An unsigned single byte (8-bit) value representing a character." + defmacro uchar, do: quote(do: unsigned - 8) + + # =========================================================================== + # Signed integers — with optional endianness argument + # =========================================================================== + + @doc """ + A signed 4-byte (32-bit) value. + + Defaults to little-endian. Pass `:big` for big-endian. + """ + defmacro long(endian \\ :little) + defmacro long(:little), do: quote(do: little - signed - 32) + defmacro long(:big), do: quote(do: signed - 32) + + @doc """ + A signed 8-byte (64-bit) value. + + Defaults to little-endian. Pass `:big` for big-endian. + """ + defmacro longlong(endian \\ :little) + defmacro longlong(:little), do: quote(do: little - signed - 64) + defmacro longlong(:big), do: quote(do: signed - 64) + + @doc "A signed 8-bit integer." + defmacro int8, do: quote(do: signed - 8) + + @doc "A signed 16-bit little-endian integer." + defmacro int16, do: quote(do: little - signed - 16) + + @doc "A signed 32-bit little-endian integer." + defmacro int32, do: quote(do: little - signed - 32) + + @doc "A signed 64-bit little-endian integer." + defmacro int64, do: quote(do: little - signed - 64) + + # =========================================================================== + # Unsigned integer aliases (from BinaryUtils) + # =========================================================================== + + @doc "An unsigned 8-bit integer." + defmacro uint8, do: quote(do: unsigned - 8) + + @doc "An unsigned 16-bit little-endian integer." + defmacro uint16, do: quote(do: little - unsigned - 16) + + @doc "An unsigned 32-bit little-endian integer." + defmacro uint32, do: quote(do: little - unsigned - 32) + + @doc "An unsigned 64-bit little-endian integer." 
+ defmacro uint64, do: quote(do: little - unsigned - 64) + + # =========================================================================== + # Floats — little-endian (from BinaryUtils) + # =========================================================================== + + @doc "A 32-bit little-endian float." + defmacro float32, do: quote(do: little - signed - float - 32) + + @doc "A 64-bit little-endian float." + defmacro float64, do: quote(do: little - signed - float - 64) + + # =========================================================================== + # Length prefixes (from BinaryUtils) + # =========================================================================== + + @doc "Unsigned 8-bit length prefix." + defmacro bytelen, do: quote(do: unsigned - 8) + + @doc "Unsigned 16-bit little-endian length prefix." + defmacro ushortlen, do: quote(do: little - unsigned - 16) + + @doc "Unsigned 16-bit little-endian char/binary length prefix." + defmacro ushortcharbinlen, do: quote(do: little - unsigned - 16) + + @doc "Signed 32-bit little-endian length prefix." + defmacro longlen, do: quote(do: little - signed - 32) + + @doc "Unsigned 64-bit little-endian length prefix." + defmacro ulonglonglen, do: quote(do: little - unsigned - 64) + + # =========================================================================== + # Type metadata (from BinaryUtils) + # =========================================================================== + + @doc "Unsigned 8-bit precision value." + defmacro precision, do: quote(do: unsigned - 8) + + @doc "Unsigned 8-bit scale value." + defmacro scale, do: quote(do: unsigned - 8) + + # =========================================================================== + # Null markers (from BinaryUtils) + # =========================================================================== + + @doc "A single byte (8-bit) NULL value." + defmacro gen_null, do: quote(do: size(8)) + + @doc "A 2-byte (16-bit) NULL value for char/binary data." 
+ defmacro charbin_null16, do: quote(do: size(16)) + + @doc "A 4-byte (32-bit) NULL value for char/binary data." + defmacro charbin_null32, do: quote(do: size(32)) + + # =========================================================================== + # Reserved fields (from BinaryUtils — include literal zero values) + # =========================================================================== + + @doc "A single reserved bit, set to 0." + defmacro freservedbit, do: quote(do: 0x0 :: size(1)) + + @doc "A single reserved byte, set to 0x00." + defmacro freservedbyte, do: quote(do: 0x00 :: size(8)) + + # =========================================================================== + # Fixed-width special (from BinaryUtils) + # =========================================================================== + + @doc "An unsigned 6-byte (48-bit) value." + defmacro sixbyte, do: quote(do: unsigned - 48) + + @doc "A single bit value of either 0 or 1." + defmacro bit, do: quote(do: size(1)) + + # =========================================================================== + # Parameterized binary/unicode (from BinaryUtils) + # =========================================================================== + + @doc "A binary of `size` bytes." + defmacro binary(size), do: quote(do: binary - size(unquote(size))) + + @doc "A binary of `size * unit` bits." + defmacro binary(size, unit), + do: quote(do: binary - size(unquote(size)) - unit(unquote(unit))) + + @doc "A little-endian UCS-2 binary of `size` 16-bit code units." + defmacro unicode(size), + do: quote(do: binary - little - size(unquote(size)) - unit(16)) + + # =========================================================================== + # Parameterized macros (from Grammar, for collation and structured fields) + # =========================================================================== + + @doc "A field of `n` consecutive 1-bit units." 
+ defmacro bit(n), do: quote(do: size(1) - unit(unquote(n))) + + @doc "An unsigned field of `n` bytes." + defmacro byte(n), do: quote(do: unsigned - size(unquote(n)) - unit(8)) + + @doc "An unsigned field of `n` bytes (character variant)." + defmacro uchar(n), do: quote(do: unsigned - size(unquote(n)) - unit(8)) + + @doc "A field of `n` UCS-2 (16-bit) character units." + defmacro unicodechar(n), do: quote(do: size(unquote(n)) - unit(16)) + + @doc "A binary field of `n` bytes." + defmacro bigbinary(n), do: quote(do: binary - size(unquote(n)) - unit(8)) + + @doc "A reserved bit field of `n` 1-bit units for padding." + defmacro freservedbit(n), do: quote(do: size(1) - unit(unquote(n))) + + @doc "A reserved byte field of `n` bytes for padding." + defmacro freservedbyte(n), do: quote(do: size(unquote(n)) - unit(8)) + + @doc """ + A 2-byte or 4-byte NULL marker for char/binary data. + + `n` must be 2 or 4. + """ + defmacro charbin_null(n) when n in [2, 4], + do: quote(do: size(unquote(n)) - unit(8)) +end diff --git a/lib/tds/protocol/constants.ex b/lib/tds/protocol/constants.ex new file mode 100644 index 0000000..4ab7c3f --- /dev/null +++ b/lib/tds/protocol/constants.ex @@ -0,0 +1,350 @@ +defmodule Tds.Protocol.Constants do + @moduledoc """ + All TDS protocol constants: packet types, data type codes, token codes, and flags. + + Provides macros that expand to integer literals at compile time, + making them usable in binary pattern matching and guard clauses. + + ## Usage + + require Tds.Protocol.Constants + alias Tds.Protocol.Constants + + # In a function head / binary match: + def decode(<<Constants.token(:row), tail::binary>>), do: ... 
+ + # As a plain value: + type = Constants.packet_type(:login7) + """ + + # --------------------------------------------------------------------------- + # Packet Types + # --------------------------------------------------------------------------- + + @packet_types %{ + sql_batch: 0x01, + rpc: 0x03, + tabular_result: 0x04, + attention: 0x06, + bulk: 0x07, + fedauth_token: 0x08, + transaction_manager: 0x0E, + login7: 0x10, + sspi: 0x11, + prelogin: 0x12 + } + + @doc "Returns the numeric packet type code for the given atom." + defmacro packet_type(name) do + Map.fetch!(@packet_types, name) + end + + # --------------------------------------------------------------------------- + # Packet Sizes + # --------------------------------------------------------------------------- + + @packet_sizes %{ + header_size: 8, + max_data_size: 4088, + max_packet_size: 4096 + } + + @doc "Returns the packet size constant for the given atom." + defmacro packet_size(name) do + Map.fetch!(@packet_sizes, name) + end + + # --------------------------------------------------------------------------- + # TDS Data Type Codes + # --------------------------------------------------------------------------- + + # Fixed-length data types (zero-length null is included here) + @fixed_types %{ + null: 0x1F, + tinyint: 0x30, + bit: 0x32, + smallint: 0x34, + int: 0x38, + smalldatetime: 0x3A, + real: 0x3B, + money: 0x3C, + datetime: 0x3D, + float: 0x3E, + smallmoney: 0x7A, + bigint: 0x7F + } + + # Variable-length data types + @variable_types %{ + uniqueidentifier: 0x24, + intn: 0x26, + # Legacy types + decimal: 0x37, + numeric: 0x3F, + bitn: 0x68, + decimaln: 0x6A, + numericn: 0x6C, + floatn: 0x6D, + moneyn: 0x6E, + datetimen: 0x6F, + daten: 0x28, + timen: 0x29, + datetime2n: 0x2A, + datetimeoffsetn: 0x2B, + # Legacy short types + char: 0x2F, + varchar: 0x27, + binary: 0x2D, + varbinary: 0x25, + # Big types (used for actual protocol encoding) + bigvarbinary: 0xA5, + bigvarchar: 0xA7, + bigbinary: 0xAD, + 
bigchar: 0xAF, + nvarchar: 0xE7, + nchar: 0xEF, + xml: 0xF1, + udt: 0xF0, + json: 0xF4, + vector: 0xF5, + text: 0x23, + image: 0x22, + ntext: 0x63, + variant: 0x62 + } + + @all_types Map.merge(@fixed_types, @variable_types) + + @doc "Returns the numeric TDS data type code for the given atom." + defmacro tds_type(name) do + Map.fetch!(@all_types, name) + end + + # Fixed data types mapped by code -> byte length + @fixed_data_types_map %{ + 0x1F => 0, + 0x30 => 1, + 0x32 => 1, + 0x34 => 2, + 0x38 => 4, + 0x3A => 4, + 0x3B => 4, + 0x3C => 8, + 0x3D => 8, + 0x3E => 8, + 0x7A => 4, + 0x7F => 8 + } + + @doc "Returns a map of fixed type code => byte length." + @spec fixed_data_types() :: %{non_neg_integer() => non_neg_integer()} + def fixed_data_types, do: @fixed_data_types_map + + @doc "Returns true if the given type code is a fixed-length data type." + @spec is_fixed_type?(non_neg_integer()) :: boolean() + def is_fixed_type?(code), do: Map.has_key?(@fixed_data_types_map, code) + + @doc "Returns the byte length for a fixed type code, or nil if not a fixed type." + @spec fixed_type_length(non_neg_integer()) :: non_neg_integer() | nil + def fixed_type_length(code), do: Map.get(@fixed_data_types_map, code) + + # --------------------------------------------------------------------------- + # Token Codes + # --------------------------------------------------------------------------- + + @tokens %{ + offset: 0x78, + returnstatus: 0x79, + colmetadata: 0x81, + altmetadata: 0x88, + dataclassification: 0xA3, + tabname: 0xA4, + colinfo: 0xA5, + order: 0xA9, + error: 0xAA, + info: 0xAB, + returnvalue: 0xAC, + loginack: 0xAD, + featureextack: 0xAE, + row: 0xD1, + nbcrow: 0xD2, + altrow: 0xD3, + envchange: 0xE3, + sessionstate: 0xE4, + sspi: 0xED, + fedauthinfo: 0xEE, + done: 0xFD, + doneproc: 0xFE, + doneinproc: 0xFF + } + + @doc "Returns the numeric token code for the given atom." 
+ defmacro token(name) do + Map.fetch!(@tokens, name) + end + + # --------------------------------------------------------------------------- + # Encryption Flags + # --------------------------------------------------------------------------- + + @encryption_flags %{ + off: 0x00, + on: 0x01, + not_supported: 0x02, + required: 0x03 + } + + @doc "Returns the numeric encryption flag for the given atom." + defmacro encryption(name) do + Map.fetch!(@encryption_flags, name) + end + + # --------------------------------------------------------------------------- + # Prelogin Token Types + # --------------------------------------------------------------------------- + + @prelogin_token_types %{ + version: 0x00, + encryption: 0x01, + instopt: 0x02, + thread_id: 0x03, + mars: 0x04, + trace_id: 0x05, + fed_auth_required: 0x06, + nonce_opt: 0x07, + terminator: 0xFF + } + + @doc "Returns the numeric prelogin token type for the given atom." + defmacro prelogin_token_type(name) do + Map.fetch!(@prelogin_token_types, name) + end + + # --------------------------------------------------------------------------- + # Time Scale to Byte Length + # --------------------------------------------------------------------------- + + @time_scale_lengths %{ + 0 => 3, + 1 => 3, + 2 => 3, + 3 => 4, + 4 => 4, + 5 => 5, + 6 => 5, + 7 => 5 + } + + @doc "Returns the byte length needed to store a time value at the given scale (0..7)." + @spec time_byte_length(0..7) :: 3 | 4 | 5 + def time_byte_length(scale) when scale in 0..7 do + Map.fetch!(@time_scale_lengths, scale) + end + + # --------------------------------------------------------------------------- + # PLP (Partially Length-Prefixed) Constants + # --------------------------------------------------------------------------- + + @plp_constants %{ + null: 0xFFFFFFFFFFFFFFFF, + unknown_length: 0xFFFFFFFFFFFFFFFE, + marker_length: 0xFFFF, + max_short_data_size: 8000 + } + + @doc "Returns the PLP constant for the given atom." 
+ defmacro plp(name) do + Map.fetch!(@plp_constants, name) + end + + # --------------------------------------------------------------------------- + # Environment Change Types + # --------------------------------------------------------------------------- + + @envchange_types %{ + database: 0x01, + language: 0x02, + charset: 0x03, + packet_size: 0x04, + unicode_data_sorting_local_id: 0x05, + unicode_data_sorting_comparison_flags: 0x06, + sql_collation: 0x07, + begin_transaction: 0x08, + commit_transaction: 0x09, + rollback_transaction: 0x0A, + enlist_dtc_transaction: 0x0B, + defect_transaction: 0x0C, + real_time_log_shipping: 0x0D, + promote_transaction: 0x0F, + transaction_manager_address: 0x10, + transaction_ended: 0x11, + reset_completion_acknowledgement: 0x12, + user_instance_started: 0x13, + routing_info: 0x14 + } + + @doc "Returns the numeric environment change type for the given atom." + defmacro envchange_type(name) do + Map.fetch!(@envchange_types, name) + end + + # --------------------------------------------------------------------------- + # Isolation Levels + # --------------------------------------------------------------------------- + + @isolation_levels %{ + read_uncommitted: 0x01, + read_committed: 0x02, + repeatable_read: 0x03, + snapshot: 0x04, + serializable: 0x05 + } + + @doc "Returns the numeric isolation level for the given atom." + defmacro isolation_level(name) do + Map.fetch!(@isolation_levels, name) + end + + # --------------------------------------------------------------------------- + # TDS Protocol Versions + # --------------------------------------------------------------------------- + + @tds_versions %{ + tds_7_0: 0x70000000, + tds_7_1: 0x71000001, + tds_7_2: 0x72090002, + tds_7_3a: 0x730A0003, + tds_7_3b: 0x730B0003, + tds_7_4: 0x74000004 + } + + @doc "Returns the 4-byte TDS version code for the given atom." 
+ defmacro tds_version(name) do + Map.fetch!(@tds_versions, name) + end + + # --------------------------------------------------------------------------- + # Login7 Feature Extension IDs + # --------------------------------------------------------------------------- + + @feature_ids %{ + sessionrecovery: 0x01, + fedauth: 0x02, + columnencryption: 0x04, + globaltransactions: 0x05, + azuresqlsupport: 0x08, + dataclassification: 0x09, + utf8_support: 0x0A, + azuresqldnscaching: 0x0B, + jsonsupport: 0x0D, + vectorsupport: 0x0E, + enhancedroutingsupport: 0x0F, + useragent: 0x10, + terminator: 0xFF + } + + @doc "Returns the numeric feature extension ID for the given atom." + defmacro feature_id(name) do + Map.fetch!(@feature_ids, name) + end +end diff --git a/lib/tds/protocol/login7.ex b/lib/tds/protocol/login7.ex index c5dd5f5..a940d11 100644 --- a/lib/tds/protocol/login7.ex +++ b/lib/tds/protocol/login7.ex @@ -5,13 +5,10 @@ defmodule Tds.Protocol.Login7 do See: https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-tds/773a62b6-ee89-4c02-9e5e-344882630aac """ alias Tds.Encoding.UCS2 - import Tds.BinaryUtils + alias Tds.Protocol.Packet + import Tds.Protocol.Binary + import Tds.Protocol.Constants - @packet_header 0x10 - ## Packet Size - @tds_pack_header_size 8 - @tds_pack_data_size 4088 - @tds_pack_size @tds_pack_header_size + @tds_pack_data_size @max_supported_tds_version <<0x04, 0x00, 0x00, 0x74>> @default_client_version <<0x04, 0x00, 0x00, 0x07>> @client_pid <<0x00, 0x10, 0x00, 0x00>> @@ -67,7 +64,7 @@ defmodule Tds.Protocol.Login7 do %__MODULE__{ tds_version: @max_supported_tds_version, - packet_size: <<@tds_pack_size::little-size(4)-unit(8)>>, + packet_size: <<packet_size(:max_packet_size)::little-size(4)-unit(8)>>, hostname: to_string(hostname), app_name: Keyword.get(opts, :app_name, @default_app_name), client_version: @default_client_version, @@ -95,7 +92,7 @@ defmodule Tds.Protocol.Login7 do login7_len = byte_size(login7) + 4 data = <<login7_len::little-size(32)>> <> login7 - Tds.Messages.encode_packets(@packet_header, data) + 
Packet.encode(packet_type(:login7), data) end defp fixed_login(login) do diff --git a/lib/tds/protocol/packet.ex b/lib/tds/protocol/packet.ex new file mode 100644 index 0000000..6efa2c7 --- /dev/null +++ b/lib/tds/protocol/packet.ex @@ -0,0 +1,312 @@ +defmodule Tds.Protocol.Packet do + @moduledoc """ + TDS packet framing: encode payloads into TDS packets and + decode packet headers. + + TDS packets are at most 4096 bytes: an 8-byte header followed + by up to 4088 bytes of data. Messages larger than 4088 bytes + are split across multiple packets with incrementing packet IDs. + """ + + import Tds.Protocol.Constants + + @status_more 0x00 + @status_eom 0x01 + + @type header :: %{ + type: byte(), + status: byte(), + length: pos_integer(), + spid: non_neg_integer(), + packet_id: byte(), + window: byte() + } + + @type sock :: + {module(), :gen_tcp.socket() | :ssl.sslsocket()} + + @default_max_payload_size 200 * 1024 * 1024 + + @doc """ + Encode a payload into one or more TDS packets. + + Returns a list of iodata, one entry per packet. Each entry + contains an 8-byte TDS header followed by up to 4088 bytes + of payload data. + + Packet IDs start at 1 and wrap at 256 per the TDS spec. + """ + @spec encode(byte(), binary()) :: [iodata()] + def encode(type, payload) when is_binary(payload) do + do_encode(type, payload, 1) + end + + defp do_encode(_type, <<>>, _id), do: [] + + defp do_encode( + type, + <<chunk::binary-size(4088), rest::binary>>, + id + ) do + status = + if byte_size(rest) > 0, do: @status_more, else: @status_eom + + [ + build_packet(type, chunk, id, status) + | do_encode(type, rest, rem(id + 1, 256)) + ] + end + + defp do_encode(type, chunk, id) when is_binary(chunk) do + [build_packet(type, chunk, id, @status_eom)] + end + + defp build_packet(type, data, id, status) do + length = byte_size(data) + packet_size(:header_size) + [<<type, status, length::16-big, 0::16, id, 0>>, data] + end + + @doc """ + Parse an 8-byte TDS packet header from a binary. 
+ + Returns `{:ok, header, rest}` where `rest` is the remaining + bytes after the header, or `{:error, :incomplete_header}` if + the binary is shorter than 8 bytes. + """ + @spec decode_header(binary()) :: + {:ok, header(), binary()} | {:error, :incomplete_header} + def decode_header(<< + type, + status, + length::16-big, + spid::16-little, + packet_id, + window, + rest::binary + >>) do + header = %{ + type: type, + status: status, + length: length, + spid: spid, + packet_id: packet_id, + window: window + } + + {:ok, header, rest} + end + + def decode_header(_), do: {:error, :incomplete_header} + + # --------------------------------------------------------------------------- + # Reassembly + # --------------------------------------------------------------------------- + + @doc """ + Read and reassemble a complete TDS message from the socket. + + Reads one or more TDS packets, validates packet ID ordering, + strips headers, and concatenates the data payloads. + + Returns `{:ok, type, payload}` on success. 
+ + ## Options + + * `:max_payload_size` - maximum allowed payload in bytes + (default: 200 MB) + """ + @spec reassemble(sock(), keyword()) :: + {:ok, byte(), binary()} | {:error, term()} + def reassemble(sock, opts \\ []) do + max = + Keyword.get( + opts, + :max_payload_size, + @default_max_payload_size + ) + + do_reassemble(sock, <<>>, nil, [], 0, nil, max) + end + + defp do_reassemble( + {mod, port} = sock, + pending, + pkt_type, + buf, + total, + expected_id, + max + ) do + case mod.recv(port, 0) do + {:ok, data} -> + process_packets( + sock, + pending <> data, + pkt_type, + buf, + total, + expected_id, + max + ) + + {:error, reason} -> + {:error, {:recv_failed, reason}} + end + end + + defp process_packets( + sock, + data, + pkt_type, + buf, + total, + expected_id, + max + ) do + case decode_header(data) do + {:error, :incomplete_header} -> + do_reassemble( + sock, + data, + pkt_type, + buf, + total, + expected_id, + max + ) + + {:ok, header, rest} -> + case validate_packet_id(expected_id, header.packet_id) do + :ok -> + type = pkt_type || header.type + data_len = header.length - packet_size(:header_size) + + extract_and_continue( + sock, + rest, + type, + buf, + total, + header, + data_len, + max + ) + + {:error, _} = err -> + err + end + end + end + + defp extract_and_continue( + sock, + rest, + pkt_type, + buf, + total, + header, + data_len, + max + ) do + case collect_chunk(sock, rest, data_len) do + {:ok, chunk, tail} -> + new_total = total + byte_size(chunk) + + if new_total > max do + {:error, {:payload_too_large, new_total, max}} + else + next_id = rem(header.packet_id + 1, 256) + + finish_or_continue( + sock, + tail, + pkt_type, + [chunk | buf], + new_total, + header.status, + next_id, + max + ) + end + + {:error, _} = err -> + err + end + end + + defp collect_chunk(sock, available, needed) do + available_len = byte_size(available) + + if available_len >= needed do + <<chunk::binary-size(needed), tail::binary>> = available + {:ok, chunk, tail} + else + {mod, port} = sock + remaining = needed 
- available_len + + case mod.recv(port, remaining) do + {:ok, more} -> + combined = available <> more + <<chunk::binary-size(needed), tail::binary>> = combined + {:ok, chunk, tail} + + {:error, reason} -> + {:error, {:recv_failed, reason}} + end + end + end + + defp finish_or_continue( + _sock, + _tail, + pkt_type, + buf, + _total, + @status_eom, + _next_id, + _max + ) do + payload = buf |> Enum.reverse() |> IO.iodata_to_binary() + {:ok, pkt_type, payload} + end + + defp finish_or_continue( + sock, + tail, + pkt_type, + buf, + total, + @status_more, + next_id, + max + ) do + if byte_size(tail) > 0 do + process_packets( + sock, + tail, + pkt_type, + buf, + total, + next_id, + max + ) + else + do_reassemble( + sock, + <<>>, + pkt_type, + buf, + total, + next_id, + max + ) + end + end + + defp validate_packet_id(nil, _actual), do: :ok + defp validate_packet_id(expected, expected), do: :ok + + defp validate_packet_id(expected, actual) do + {:error, {:out_of_order, expected: expected, got: actual}} + end +end diff --git a/lib/tds/protocol/prelogin.ex b/lib/tds/protocol/prelogin.ex index b9ea692..a09748d 100644 --- a/lib/tds/protocol/prelogin.ex +++ b/lib/tds/protocol/prelogin.ex @@ -4,7 +4,9 @@ defmodule Tds.Protocol.Prelogin do See: https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-tds/60f56408-0188-4cd5-8b90-25c6f2423868 """ - import Tds.Protocol.Grammar + alias Tds.Protocol.Packet + import Tds.Protocol.Binary + import Tds.Protocol.Constants require Logger @type state :: Tds.Protocol.t() @@ -30,24 +32,8 @@ defmodule Tds.Protocol.Prelogin do mars: boolean() } - @packet_header 0x12 - - # PL Options Tokens - @version_token 0x00 - @encryption_token 0x01 - @instopt_token 0x02 - @thread_id_token 0x03 - @mars_token 0x04 - # @trace_id_token 0x05 - @fed_auth_required_token 0x06 - @nonce_opt_token 0x07 - @terminator_token 0xFF - - # Encryption flags - @encryption_off 0x00 - @encryption_on 0x01 - @encryption_not_supported 0x02 - @encryption_required 0x03 + # Packet type, prelogin token types, and 
encryption flags are now + # sourced from Tds.Protocol.Constants via import above. @version Mix.Project.config()[:version] |> String.split(".") @@ -56,7 +42,7 @@ defmodule Tds.Protocol.Prelogin do @spec encode(maybe_improper_list()) :: [binary(), ...] def encode(opts) do stream = [ - {@version_token, get_version()}, + {prelogin_token_type(:version), get_version()}, encode_encryption(opts), # when instance id check is sent, encryption is not negotiated # encode_instance(opts), @@ -69,13 +55,13 @@ defmodule Tds.Protocol.Prelogin do {iodata, _} = stream - |> Enum.reduce({[[], @terminator_token, []], start_offset}, fn + |> Enum.reduce({[[], prelogin_token_type(:terminator), []], start_offset}, fn {token, option_data}, {[options, term, data], offset} -> data_length = byte_size(option_data) options = [ options, - <<token, offset::ushort(), data_length::ushort()>> + <<token, offset::ushort(:big), data_length::ushort(:big)>> ] data = [data, option_data] @@ -83,14 +69,14 @@ end) data = IO.iodata_to_binary(iodata) - Tds.Messages.encode_packets(@packet_header, data) + Packet.encode(packet_type(:prelogin), data) end defp get_version do @version |> case do [major, minor, build] -> - <> + <> [major, minor] -> <<0x00, 0x00, minor, major, 0x00, 0x00>> @@ -106,13 +92,13 @@ data = case ssl?(opts) do :on -> - <<@encryption_on::byte()>> + <<encryption(:on)::byte()>> :not_supported -> - <<@encryption_not_supported::byte()>> + <<encryption(:not_supported)::byte()>> :required -> - <<@encryption_required::byte()>> + <<encryption(:required)::byte()>> :off -> # TODO: Support ssl: :off @@ -120,13 +106,13 @@ # the other packages are send unencrypted over the wire. 
raise ArgumentError, ~s("ssl: :off" is currently not supported) - # <<@encryption_off::byte>> + # <<encryption(:off)::byte>> value -> raise ArgumentError, "invalid value for :ssl: #{inspect(value)}" end - {@encryption_token, data} + {prelogin_token_type(:encryption), data} end # defp encode_instance(opts) do @@ -149,15 +135,15 @@ |> Integer.parse() |> elem(0) - {@thread_id_token, <>} + {prelogin_token_type(:thread_id), <>} end defp encode_mars(_opts) do - {@mars_token, <<0x00>>} + {prelogin_token_type(:mars), <<0x00>>} end defp encode_fed_auth_required(_opts) do - {@fed_auth_required_token, <<0x01>>} + {prelogin_token_type(:fed_auth_required), <<0x01>>} end # DECODE @@ -177,23 +163,23 @@ disconnect(msg, s) # Encryption is off. Allowed server response is :off or :not_supported - {:off, enc, _} when enc in [<<@encryption_off>>, <<@encryption_not_supported>>] -> + {:off, enc, _} when enc in [<<encryption(:off)>>, <<encryption(:not_supported)>>] -> {:login, s} # TODO: Encryption is off but server has encryption on. Should upgrade. - {:off, <<@encryption_required>>, _} -> + {:off, <<encryption(:required)>>, _} -> disconnect("Server does not allow the requested encryption level.", s) # Encryption is not supported. The server needs to respond with :not_supported - {:not_supported, <<@encryption_not_supported>>, _} -> + {:not_supported, <<encryption(:not_supported)>>, _} -> {:login, s} # Encryption is on. The server needs to respond with :on - {:on, <<@encryption_on>>, _} -> + {:on, <<encryption(:on)>>, _} -> {:encrypt, s} # Encryption is required. 
The server needs to respond with :on - {:required, <<@encryption_on>>, _} -> + {:required, <<encryption(:on)>>, _} -> {:encrypt, s} {_, _, _} -> @@ -202,7 +188,8 @@ end defp decode_tokens( - <<@version_token, offset::ushort(), length::ushort(), tail::binary>>, + <<prelogin_token_type(:version), offset::ushort(:big), length::ushort(:big), tail::binary>>, tokens, s ) do @@ -211,7 +198,8 @@ end defp decode_tokens( - <<@encryption_token, offset::ushort(), length::ushort(), tail::binary>>, + <<prelogin_token_type(:encryption), offset::ushort(:big), length::ushort(:big), tail::binary>>, tokens, s ) do @@ -220,16 +208,18 @@ end defp decode_tokens( - <<@instopt_token, offset::ushort(), length::ushort(), tail::binary>>, + <<prelogin_token_type(:instopt), offset::ushort(:big), length::ushort(:big), tail::binary>>, tokens, s ) do - tokens = [{:encryption, offset, length} | tokens] + tokens = [{:instance, offset, length} | tokens] decode_tokens(tail, tokens, s) end defp decode_tokens( - <<@thread_id_token, offset::ushort(), length::ushort(), tail::binary>>, + <<prelogin_token_type(:thread_id), offset::ushort(:big), length::ushort(:big), tail::binary>>, tokens, s ) do @@ -238,7 +228,7 @@ end defp decode_tokens( - <<@mars_token, offset::ushort(), length::ushort(), tail::binary>>, + <<prelogin_token_type(:mars), offset::ushort(:big), length::ushort(:big), tail::binary>>, tokens, s ) do @@ -247,7 +237,8 @@ end defp decode_tokens( - <<@fed_auth_required_token, offset::ushort(), length::ushort(), tail::binary>>, + <<prelogin_token_type(:fed_auth_required), offset::ushort(:big), length::ushort(:big), tail::binary>>, tokens, s ) do @@ -256,7 +247,8 @@ end defp decode_tokens( - <<@nonce_opt_token, offset::ushort(), length::ushort(), tail::binary>>, + <<prelogin_token_type(:nonce_opt), offset::ushort(:big), length::ushort(:big), tail::binary>>, tokens, s ) do @@ -265,7 +257,7 @@ end defp decode_tokens( - <<@terminator_token, tail::binary>>, + <<prelogin_token_type(:terminator), tail::binary>>, tokens, _s ) do diff --git a/lib/tds/query.ex b/lib/tds/query.ex index 76d3362..02a4293 100644 --- a/lib/tds/query.ex +++ b/lib/tds/query.ex @@ -13,7 +13,6 @@ end defimpl DBConnection.Query, for: Tds.Query do alias Tds.Parameter - alias Tds.Types alias Tds.Query alias Tds.Result @@ -26,7 +25,7 @@ defimpl DBConnection.Query, for: Tds.Query do end def encode(%Query{statement: statement}, params, _) do - param_desc = Enum.map_join(params, ", ", 
&Types.encode_param_descriptor/1) + param_desc = Enum.map_join(params, ", ", &Parameter.encode_param_descriptor/1) [ %Parameter{value: statement, type: :string}, diff --git a/lib/tds/tokens.ex b/lib/tds/tokens.ex index 31e26e2..285819d 100644 --- a/lib/tds/tokens.ex +++ b/lib/tds/tokens.ex @@ -1,13 +1,16 @@ defmodule Tds.Tokens do @moduledoc false - import Tds.BinaryUtils + import Tds.Protocol.Binary + import Tds.Protocol.Constants import Bitwise require Logger alias Tds.Encoding.UCS2 - alias Tds.Types + alias Tds.Type.{DataReader, Registry} + + @registry Registry.new() def retval_typ_size(38) do # 0x26 - SYBINTN - 1 @@ -43,30 +46,53 @@ defmodule Tds.Tokens do [] end - def decode_tokens(<<token, tail::binary>>, collmetadata) do + def decode_tokens( + <<token, tail::binary>>, + collmetadata + ) do {token_data, tail, collmetadata} = case token do - 0x81 -> decode_colmetadata(tail, collmetadata) - # 0xA5 -> decode_colinfo(tail, collmetadata) - 0xFD -> decode_done(tail, collmetadata) - 0xFE -> decode_doneproc(tail, collmetadata) - 0xFF -> decode_doneinproc(tail, collmetadata) - 0xE3 -> decode_envchange(tail, collmetadata) - 0xAA -> decode_error(tail, collmetadata) - # 0xAE -> decode_featureextack(tail, collmetadata) - # 0xEE -> decode_fedauthinfo(tail, collmetadata) - 0xAB -> decode_info(tail, collmetadata) - 0xAD -> decode_loginack(tail, collmetadata) - 0xD2 -> decode_nbcrow(tail, collmetadata) - # 0x78 -> decode_offset(tail, collmetadata) - 0xA9 -> decode_order(tail, collmetadata) - 0x79 -> decode_returnstatus(tail, collmetadata) - 0xAC -> decode_returnvalue(tail, collmetadata) - 0xD1 -> decode_row(tail, collmetadata) - # 0xE4 -> decode_sessionstate(tail, collmetadata) - # 0xED -> decode_sspi(tail, collmetadata) - # 0xA4 -> decode_tablename(tail, collmetadata) - t -> raise_unsupported_token(t, collmetadata) + token(:colmetadata) -> + decode_colmetadata(tail, collmetadata) + + token(:done) -> + decode_done(tail, collmetadata) + + token(:doneproc) -> + decode_doneproc(tail, collmetadata) + + token(:doneinproc) 
-> + decode_doneinproc(tail, collmetadata) + + token(:envchange) -> + decode_envchange(tail, collmetadata) + + token(:error) -> + decode_error(tail, collmetadata) + + token(:info) -> + decode_info(tail, collmetadata) + + token(:loginack) -> + decode_loginack(tail, collmetadata) + + token(:nbcrow) -> + decode_nbcrow(tail, collmetadata) + + token(:order) -> + decode_order(tail, collmetadata) + + token(:returnstatus) -> + decode_returnstatus(tail, collmetadata) + + token(:returnvalue) -> + decode_returnvalue(tail, collmetadata) + + token(:row) -> + decode_row(tail, collmetadata) + + t -> + raise_unsupported_token(t, collmetadata) end [token_data | decode_tokens(tail, collmetadata)] @@ -74,7 +100,8 @@ defmodule Tds.Tokens do defp raise_unsupported_token(token, _) do raise RuntimeError, - "Unsupported Token code #{inspect(token, base: :hex)} in Token Stream" + "Unsupported Token code " <> + "#{inspect(token, base: :hex)} in Token Stream" end defp decode_returnvalue(bin, collmetadata) do @@ -89,8 +116,8 @@ >> = bin name = UCS2.to_string(name) - {type_info, tail} = Tds.Types.decode_info(data) - {value, tail} = Tds.Types.decode_data(type_info, tail) + {meta, tail} = decode_type_metadata(data) + {value, tail} = decode_type_value(meta, tail) param = %Tds.Parameter{name: name, value: value, direction: :output} {{:returnvalue, param}, tail, collmetadata} end @@ -112,7 +139,10 @@ end # ORDER - defp decode_order(<<length::little-uint16(), tail::binary>>, collmetadata) do + defp decode_order( + <<length::uint16(), tail::binary>>, + collmetadata + ) do length = trunc(length / 2) {columns, tail} = decode_column_order(tail, length) {{:order, columns}, tail, collmetadata} end @@ -146,8 +176,6 @@ line_number: line_number } - # TODO Need to concat errors for delivery - # Logger.debug "SQL Error: #{inspect e}" {{:error, e}, tail, collmetadata} end @@ -190,7 +218,6 @@ |> IO.iodata_to_binary() end) - # tokens = Keyword.update(tokens, :info, [i], &[i | &1]) {{:info, 
info}, tail, collmetadata} end @@ -290,12 +317,6 @@ {{:packetsize, new_packetsize, old_packetsize}, rest} - # 0x05 - # @tds_envtype_unicode_data_storing_local_id -> - - # 0x06 - # @tds_envtype_uncode_data_string_comparison_flag -> - 0x07 -> << new_value_size::unsigned-8, @@ -341,9 +362,6 @@ trans = :binary.copy(old_value) {{:transaction_rollback, <<0x00>>, trans}, rest} - # 0x0B - # @tds_envtype_enlist_dtc_transaction -> - 0x0C -> << value_size::unsigned-8, @@ -382,7 +400,7 @@ 0x13 -> << - size::little-uint16(), + size::uint16(), value::binary(size, 16), 0x00, rest::binary @@ -392,11 +410,10 @@ 0x14 -> << - _routing_data_len::little-uint16(), - # Protocol MUST be 0, specifying TCP-IP protocol + _routing_data_len::uint16(), 0x00, - port::little-uint16(), - alt_host_len::little-uint16(), + port::uint16(), + alt_host_len::uint16(), alt_host::binary(alt_host_len, 16), 0x00, 0x00, @@ -407,7 +424,7 @@ UCS2.to_string(alt_host) |> String.split("\\") |> case do - [host, instance] -> {host, instance} + [host, inst] -> {host, inst} [host] -> {host, nil} end @@ -425,8 +442,12 @@ ## DONE defp decode_done( - <<status::little-unsigned-size(2)-unit(8), cur_cmd::little-unsigned-size(2)-unit(8), row_count::little-size(8)-unit(8), tail::binary>>, + << + status::little-unsigned-size(2)-unit(8), + cur_cmd::little-unsigned-size(2)-unit(8), + row_count::little-size(8)-unit(8), + tail::binary + >>, collmetadata ) do status = %{ @@ -463,7 +484,7 @@ defp decode_loginack( << - _length::little-uint16(), + _length::uint16(), interface::size(8), tds_version::unsigned-32, prog_name_len::size(8), @@ -492,7 +513,11 @@ {Enum.reverse(acc), tail} end - defp decode_column_order(<<col_id::little-uint16(), tail::binary>>, n, acc) do + defp decode_column_order( + <<col_id::uint16(), tail::binary>>, + n, + acc + ) do decode_column_order(tail, n - 1, [col_id | acc]) end @@ -522,13 +547,10 @@ end defp decode_column(<<_usertype::int32(), _flags::int16(), tail::binary>>) do - {info, tail} = 
Types.decode_info(tail) + {info, tail} = decode_type_metadata(tail) {name, tail} = decode_column_name(tail) - info = - info - |> Map.put(:name, name) - + info = Map.put(info, :name, name) {info, tail} end @@ -543,31 +565,64 @@ defmodule Tds.Tokens do {Enum.reverse(acc), tail} end - defp decode_row_columns(<>, [column_meta | colmetadata], acc) do - {column, tail} = decode_row_column(data, column_meta) + defp decode_row_columns( + <>, + [column_meta | colmetadata], + acc + ) do + {column, tail} = decode_type_value(column_meta, data) decode_row_columns(tail, colmetadata, [column | acc]) end - defp decode_nbcrow_columns(binary, colmetadata, bitmap, acc \\ []) + defp decode_nbcrow_columns( + binary, + colmetadata, + bitmap, + acc \\ [] + ) defp decode_nbcrow_columns(<>, [], _bitmap, acc) do {Enum.reverse(acc), tail} end - defp decode_nbcrow_columns(<>, colmetadata, bitmap, acc) do + defp decode_nbcrow_columns( + <>, + colmetadata, + bitmap, + acc + ) do [column_meta | colmetadata] = colmetadata [bit | bitmap] = bitmap {column, tail} = case bit do - 0 -> decode_row_column(tail, column_meta) + 0 -> decode_type_value(column_meta, tail) _ -> {nil, tail} end - decode_nbcrow_columns(tail, colmetadata, bitmap, [column | acc]) + decode_nbcrow_columns( + tail, + colmetadata, + bitmap, + [column | acc] + ) + end + + # -- New type system pipeline ---------------------------------------- + + # Decodes type metadata from binary using Registry + handler. + defp decode_type_metadata(<> = bin) do + {:ok, handler} = + Registry.handler_for_code(@registry, type_code) + + {:ok, meta, rest} = handler.decode_metadata(bin) + {Map.put(meta, :handler, handler), rest} end - defp decode_row_column(<>, column_meta) do - Types.decode_data(column_meta, tail) + # Decodes a column value using DataReader + handler.decode. 
+  defp decode_type_value(%{handler: handler} = meta, bin) do
+    {raw, rest} = DataReader.read(meta.data_reader, bin)
+    value = handler.decode(raw, meta)
+    {value, rest}
+  end
 end
diff --git a/lib/tds/type.ex b/lib/tds/type.ex
new file mode 100644
index 0000000..acd42a7
--- /dev/null
+++ b/lib/tds/type.ex
@@ -0,0 +1,48 @@
+defmodule Tds.Type do
+  @moduledoc """
+  Behaviour for TDS type handlers.
+
+  Each handler serves one or more TDS type codes and provides
+  encode/decode between TDS wire format and Elixir values.
+  """
+
+  @type metadata :: map()
+
+  @doc "TDS type codes this handler serves (decode path)."
+  @callback type_codes() :: [byte()]
+
+  @doc "Atom type names this handler serves (encode path)."
+  @callback type_names() :: [atom()]
+
+  @doc "Parse type-specific metadata from token stream binary."
+  @callback decode_metadata(binary()) ::
+              {:ok, metadata(), rest :: binary()}
+
+  @doc """
+  Decode raw value bytes into Elixir value.
+
+  Receives `nil` for SQL NULL (DataReader detected null marker).
+  Receives raw bytes with length prefix already stripped
+  by DataReader.
+  """
+  @callback decode(nil | binary(), metadata()) :: term()
+
+  @doc """
+  Encode Elixir value to TDS binary for RPC parameter.
+
+  Returns `{type_code, colmetadata_binary, value_binary}`.
+  """
+  @callback encode(term(), metadata()) ::
+              {type_code :: byte(), meta_bin :: iodata(), value_bin :: iodata()}
+
+  @doc "Generate sp_executesql parameter descriptor string."
+  @callback param_descriptor(term(), metadata()) :: String.t()
+
+  @doc """
+  Type inference: can this handler encode this value?
+
+  Returns `{:ok, metadata}` if yes, `:skip` if not this
+  handler's type.
+  """
+  @callback infer(term()) :: {:ok, metadata()} | :skip
+end
diff --git a/lib/tds/type/binary.ex b/lib/tds/type/binary.ex
new file mode 100644
index 0000000..e427b87
--- /dev/null
+++ b/lib/tds/type/binary.ex
@@ -0,0 +1,181 @@
+defmodule Tds.Type.Binary do
+  @moduledoc """
+  TDS type handler for binary values.
+
+  Handles 5 type codes on decode:
+  - bigbinary (0xAD), bigvarbinary (0xA5) — 2-byte max_length
+  - legacy binary (0x2D), legacy varbinary (0x25) — 1-byte length
+  - image (0x22) — longlen with table name parts
+
+  Always encodes as bigvarbinary (0xA5) for parameters.
+  No character encoding — raw binary passthrough.
+  """
+
+  @behaviour Tds.Type
+
+  import Tds.Protocol.Constants
+
+  # -- type_codes / type_names ----------------------------------------
+
+  @impl true
+  def type_codes do
+    [
+      tds_type(:bigbinary),
+      tds_type(:bigvarbinary),
+      tds_type(:image),
+      tds_type(:binary),
+      tds_type(:varbinary)
+    ]
+  end
+
+  @impl true
+  def type_names, do: [:binary, :image]
+
+  # -- decode_metadata ------------------------------------------------
+
+  # Big types: bigbinary (0xAD), bigvarbinary (0xA5)
+  # 2-byte LE max_length, shortlen or plp (0xFFFF)
+  @impl true
+  def decode_metadata(<<type_code::unsigned-8, length::little-unsigned-16, rest::binary>>)
+      when type_code in [tds_type(:bigbinary), tds_type(:bigvarbinary)] do
+    data_reader = if length == 0xFFFF, do: :plp, else: :shortlen
+
+    meta = %{
+      data_reader: data_reader,
+      length: length
+    }
+
+    {:ok, meta, rest}
+  end
+
+  # Legacy short types: binary (0x2D), varbinary (0x25)
+  # 1-byte length, bytelen reader
+  def decode_metadata(<<type_code::unsigned-8, length::unsigned-8, rest::binary>>)
+      when type_code in [tds_type(:binary), tds_type(:varbinary)] do
+    meta = %{
+      data_reader: :bytelen,
+      length: length
+    }
+
+    {:ok, meta, rest}
+  end
+
+  # image (0x22): 4-byte length + numparts table names
+  def decode_metadata(
+        <<_type_code::unsigned-8, length::little-unsigned-32,
+          numparts::unsigned-8, rest::binary>>
+      ) do
+    rest = skip_table_parts(numparts, rest)
+
+    meta = %{
+      data_reader: :longlen,
+      length: length
+    }
+
+    {:ok, meta, rest}
+  end
+
+  # -- decode ---------------------------------------------------------
+
+  @impl true
+  def decode(nil, _metadata), do: nil
+  def decode(<<>>, _metadata), do: <<>>
+
+  def decode(data, _metadata) do
+    :binary.copy(data)
+  end
+
+  # -- encode ---------------------------------------------------------
+
+  @impl true
+  def encode(value, metadata) when is_integer(value) do
+    encode(<<value::little-unsigned-64>>, metadata)
+  end
+
+  def encode(nil, _metadata) do
+    type = tds_type(:bigvarbinary)
+    meta_bin = <<0xFF, 0xFF>>
+    value_bin = <<0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF>>
+    {type, meta_bin, value_bin}
+  end
+
+  def encode(value, _metadata) when is_binary(value) do
+    type = tds_type(:bigvarbinary)
+    size = byte_size(value)
+
+    cond do
+      size == 0 ->
+        meta_bin = <<0xFF, 0xFF>>
+        value_bin = <<0::unsigned-64, 0::unsigned-32>>
+        {type, meta_bin, value_bin}

+      size > 8000 ->
+        meta_bin = <<0xFF, 0xFF>>
+        value_bin = encode_plp(value)
+        {type, meta_bin, value_bin}
+
+      true ->
+        meta_bin = <<size::little-unsigned-16>>
+        value_bin = <<size::little-unsigned-16>> <> value
+        {type, meta_bin, value_bin}
+    end
+  end
+
+  # -- param_descriptor -----------------------------------------------
+
+  @impl true
+  def param_descriptor(value, metadata) when is_integer(value) do
+    param_descriptor(<<value::little-unsigned-64>>, metadata)
+  end
+
+  def param_descriptor(nil, _metadata), do: "varbinary(1)"
+
+  def param_descriptor(value, _metadata) when is_binary(value) do
+    if byte_size(value) <= 0 do
+      "varbinary(1)"
+    else
+      "varbinary(max)"
+    end
+  end
+
+  # -- infer ----------------------------------------------------------
+
+  @impl true
+  def infer(value) when is_binary(value) do
+    if String.valid?(value) do
+      :skip
+    else
+      {:ok, %{}}
+    end
+  end
+
+  def infer(_value), do: :skip
+
+  # -- private helpers ------------------------------------------------
+
+  defp skip_table_parts(0, rest), do: rest
+
+  defp skip_table_parts(n, rest) when n > 0 do
+    <<len::little-unsigned-16, _part::binary-size(len)-unit(16), next::binary>> = rest
+
+    skip_table_parts(n - 1, next)
+  end
+
+  defp encode_plp(data) do
+    size = byte_size(data)
+
+    <<size::little-unsigned-64>> <>
+      encode_plp_chunks(size, data, <<>>) <>
+      <<0::little-unsigned-32>>
+  end
+
+  defp encode_plp_chunks(0, _data, buf), do: buf
+
+  defp encode_plp_chunks(size, data, buf) do
+    <<_hi::unsigned-32, chunk_size::unsigned-32>> =
+      <<size::unsigned-64>>
+
+    <<chunk::binary-size(chunk_size), rest::binary>> = data
+    plp = <<chunk_size::little-unsigned-32>> <> chunk
+    encode_plp_chunks(size - chunk_size, rest, buf <> plp)
+  end
+end
diff --git a/lib/tds/type/boolean.ex b/lib/tds/type/boolean.ex
new file mode 100644
index 0000000..8df8f0f
--- /dev/null
+++ b/lib/tds/type/boolean.ex
@@ -0,0 +1,51 @@
+defmodule Tds.Type.Boolean do
+  @moduledoc """
+  TDS type handler for boolean values.
+
+  Handles fixed bit (0x32) and variable bitn (0x68) on decode.
+  Always encodes as bitn (0x68) to support NULL.
+  """
+
+  @behaviour Tds.Type
+
+  import Tds.Protocol.Constants
+
+  @impl true
+  def type_codes, do: [tds_type(:bit), tds_type(:bitn)]
+
+  @impl true
+  def type_names, do: [:boolean]
+
+  @impl true
+  def decode_metadata(<<tds_type(:bit), rest::binary>>) do
+    {:ok, %{data_reader: {:fixed, 1}}, rest}
+  end
+
+  def decode_metadata(<<tds_type(:bitn), _length::unsigned-8, rest::binary>>) do
+    {:ok, %{data_reader: :bytelen}, rest}
+  end
+
+  @impl true
+  def decode(nil, _metadata), do: nil
+  def decode(<<0x00>>, _metadata), do: false
+  def decode(_data, _metadata), do: true
+
+  @impl true
+  def encode(nil, _metadata) do
+    type = tds_type(:bitn)
+    {type, <<0x01>>, <<0x00>>}
+  end
+
+  def encode(value, _metadata) when is_boolean(value) do
+    type = tds_type(:bitn)
+    byte = if value, do: 0x01, else: 0x00
+    {type, <<0x01>>, <<0x01, byte>>}
+  end
+
+  @impl true
+  def param_descriptor(_value, _metadata), do: "bit"
+
+  @impl true
+  def infer(value) when is_boolean(value), do: {:ok, %{}}
+  def infer(_value), do: :skip
+end
diff --git a/lib/tds/type/data_reader.ex b/lib/tds/type/data_reader.ex
new file mode 100644
index 0000000..c34633c
--- /dev/null
+++ b/lib/tds/type/data_reader.ex
@@ -0,0 +1,85 @@
+defmodule Tds.Type.DataReader do
+  @moduledoc """
+  Reads type-specific value bytes from the TDS token stream.
+
+  Handles six framing strategies. All strategies sever sub-binary
+  references via `:binary.copy/1` or `IO.iodata_to_binary/1` to
+  prevent packet buffer memory leaks.
+  """
+
+  @spec read(strategy :: term(), binary()) ::
+          {nil | binary(), rest :: binary()}
+
+  # Fixed-length: size known from type metadata
+  def read({:fixed, length}, binary) do
+    <<value::binary-size(length), rest::binary>> = binary
+    {:binary.copy(value), rest}
+  end
+
+  # Bytelen: 1-byte length prefix, 0x00 = NULL
+  def read(:bytelen, <<0x00, rest::binary>>), do: {nil, rest}
+
+  def read(:bytelen, <<length::unsigned-8, data::binary-size(length), rest::binary>>),
+    do: {:binary.copy(data), rest}
+
+  # Shortlen: 2-byte LE length prefix, 0xFFFF = NULL
+  def read(:shortlen, <<0xFF, 0xFF, rest::binary>>), do: {nil, rest}
+
+  def read(:shortlen, <<length::little-unsigned-16, data::binary-size(length), rest::binary>>),
+    do: {:binary.copy(data), rest}
+
+  # Longlen: text_ptr + timestamp + 4-byte length, 0x00 = NULL
+  def read(:longlen, <<0x00, rest::binary>>), do: {nil, rest}
+
+  def read(
+        :longlen,
+        <<
+          ptr_size::unsigned-8,
+          _ptr::binary-size(ptr_size),
+          _timestamp::unsigned-64,
+          size::little-signed-32,
+          data::binary-size(size),
+          rest::binary
+        >>
+      ),
+      do: {:binary.copy(data), rest}
+
+  # Variant: 4-byte LE length prefix, 0x00000000 = NULL
+  def read(:variant, <<0::little-unsigned-32, rest::binary>>),
+    do: {nil, rest}
+
+  def read(:variant, <<size::little-unsigned-32, data::binary-size(size), rest::binary>>),
+    do: {:binary.copy(data), rest}
+
+  # PLP: 8-byte NULL marker or chunked data
+  def read(
+        :plp,
+        <<
+          0xFF,
+          0xFF,
+          0xFF,
+          0xFF,
+          0xFF,
+          0xFF,
+          0xFF,
+          0xFF,
+          rest::binary
+        >>
+      ),
+      do: {nil, rest}
+
+  def read(:plp, <<_total::little-unsigned-64, rest::binary>>) do
+    {chunks, rest} = read_plp_chunks(rest, [])
+    data = :lists.reverse(chunks) |> IO.iodata_to_binary()
+    {data, rest}
+  end
+
+  defp read_plp_chunks(<<0::little-unsigned-32, rest::binary>>, acc),
+    do: {acc, rest}
+
+  defp read_plp_chunks(
+         <<size::little-unsigned-32, chunk::binary-size(size), rest::binary>>,
+         acc
+       ),
+       do: read_plp_chunks(rest, [:binary.copy(chunk) | acc])
+end
diff --git a/lib/tds/type/datetime.ex b/lib/tds/type/datetime.ex
new file mode 100644
index 0000000..ff0ead2
--- /dev/null
+++ b/lib/tds/type/datetime.ex
@@ -0,0 +1,539 @@
+defmodule Tds.Type.DateTime do
+  @moduledoc """
+  TDS type handler for date and time values.
+
+  Handles seven type codes on decode:
+  - daten (0x28) — Date
+  - timen (0x29) — Time with scale
+  - datetime2n (0x2A) — NaiveDateTime with scale
+  - datetimeoffsetn (0x2B) — DateTime with timezone offset
+  - smalldatetime (0x3A) — 4-byte NaiveDateTime (minute precision)
+  - datetime (0x3D) — 8-byte NaiveDateTime (1/300s precision)
+  - datetimen (0x6F) — nullable smalldatetime/datetime
+
+  Always returns Elixir calendar structs: Date, Time,
+  NaiveDateTime, or DateTime. No tuple format.
+
+  Encodes Date as daten, Time as timen, NaiveDateTime as
+  datetime2n, and DateTime as datetimeoffsetn.
+  """
+
+  @behaviour Tds.Type
+
+  import Tds.Protocol.Constants
+
+  @year_1900_days :calendar.date_to_gregorian_days({1900, 1, 1})
+  @secs_in_min 60
+  @secs_in_hour 60 * @secs_in_min
+  @max_time_scale 7
+
+  @daten_code tds_type(:daten)
+  @timen_code tds_type(:timen)
+  @datetime2n_code tds_type(:datetime2n)
+  @datetimeoffsetn_code tds_type(:datetimeoffsetn)
+  @smalldatetime_code tds_type(:smalldatetime)
+  @datetime_code tds_type(:datetime)
+  @datetimen_code tds_type(:datetimen)
+
+  # -- type_codes / type_names -----------------------------------------
+
+  @impl true
+  def type_codes do
+    [
+      @daten_code,
+      @timen_code,
+      @datetime2n_code,
+      @datetimeoffsetn_code,
+      @smalldatetime_code,
+      @datetime_code,
+      @datetimen_code
+    ]
+  end
+
+  @impl true
+  def type_names do
+    [:date, :time, :datetime, :datetime2, :smalldatetime, :datetimeoffset]
+  end
+
+  # -- decode_metadata -------------------------------------------------
+
+  @impl true
+  def decode_metadata(<<@daten_code, rest::binary>>) do
+    {:ok, %{data_reader: :bytelen, type_code: @daten_code}, rest}
+  end
+
+  def decode_metadata(<<@timen_code, scale::unsigned-8, rest::binary>>) do
+    meta = %{
+      data_reader: :bytelen,
+      scale: scale,
+      type_code: @timen_code
+    }
+
+    {:ok, meta, rest}
+  end
+
+  def decode_metadata(<<@datetime2n_code, scale::unsigned-8, rest::binary>>) do
+    meta = %{
+      data_reader: :bytelen,
+      scale: scale,
+      type_code: @datetime2n_code
+    }
+
+    {:ok, meta, rest}
+  end
+
+  def decode_metadata(<<@datetimeoffsetn_code, scale::unsigned-8, rest::binary>>) do
+    meta = %{
+      data_reader: :bytelen,
+      scale: scale,
+      type_code: @datetimeoffsetn_code
+    }
+
+    {:ok, meta, rest}
+  end
+
+  def decode_metadata(<<@smalldatetime_code, rest::binary>>) do
+    meta = %{
+      data_reader: {:fixed, 4},
+      type_code: @smalldatetime_code
+    }
+
+    {:ok, meta, rest}
+  end
+
+  def decode_metadata(<<@datetime_code, rest::binary>>) do
+    meta = %{
+      data_reader: {:fixed, 8},
+      type_code: @datetime_code
+    }
+
+    {:ok, meta, rest}
+  end
+
+  def decode_metadata(<<@datetimen_code, length::unsigned-8, rest::binary>>) do
+    meta = %{
+      data_reader: :bytelen,
+      length: length,
+      type_code: @datetimen_code
+    }
+
+    {:ok, meta, rest}
+  end
+
+  # -- decode ----------------------------------------------------------
+
+  @impl true
+  def decode(nil, _metadata), do: nil
+
+  def decode(data, %{type_code: @daten_code}),
+    do: decode_date(data)
+
+  def decode(data, %{type_code: @timen_code} = m),
+    do: decode_time(m.scale, data)
+
+  def decode(data, %{type_code: @smalldatetime_code}),
+    do: decode_smalldatetime(data)
+
+  def decode(data, %{type_code: @datetime_code}),
+    do: decode_datetime(data)
+
+  def decode(data, %{type_code: @datetimen_code, length: 4}),
+    do: decode_smalldatetime(data)
+
+  def decode(data, %{type_code: @datetimen_code, length: 8}),
+    do: decode_datetime(data)
+
+  def decode(data, %{type_code: @datetime2n_code} = m),
+    do: decode_datetime2(m.scale, data)
+
+  def decode(data, %{type_code: @datetimeoffsetn_code} = m),
+    do: decode_datetimeoffset(m.scale, data)
+
+  # -- encode ----------------------------------------------------------
+
+  @impl true
+  def encode(nil, %{type: :date}) do
+    {tds_type(:daten), <<>>, <<0x00>>}
+  end
+
+  def encode(%Date{} = date, %{type: :date}) do
+    data = encode_date(date)
+    {tds_type(:daten), <<>>, [<<0x03>>, data]}
+  end
+
+  def encode(nil, %{type: :time}) do
+    type = tds_type(:timen)
+    {type, <<@max_time_scale>>, <<0x00>>}
+  end
+
+  def encode(%Time{} = time, %{type: :time}) do
+    type = tds_type(:timen)
+    {data, scale} = encode_time(time)
+    len = time_byte_length(scale)
+    {type, <<scale>>, [<<len>>, data]}
+  end
+
+  def encode(nil, %{type: :datetime2}) do
+    type = tds_type(:datetime2n)
+    {type, <<@max_time_scale>>, <<0x00>>}
+  end
+
+  def encode(%NaiveDateTime{} = ndt, %{type: :datetime2}) do
+    type = tds_type(:datetime2n)
+    {data, scale} = encode_datetime2(ndt)
+    len = time_byte_length(scale) + 3
+    {type, <<scale>>, [<<len>>, data]}
+  end
+
+  def encode(nil, %{type: :datetimeoffset}) do
+    type = tds_type(:datetimeoffsetn)
+    {type, <<@max_time_scale>>, <<0x00>>}
+  end
+
+  def encode(%DateTime{} = dt, %{type: :datetimeoffset}) do
+    type = tds_type(:datetimeoffsetn)
+    {_, scale} = dt.microsecond
+    data = encode_datetimeoffset(dt, scale)
+    len = time_byte_length(scale) + 3 + 2
+    {type, <<scale>>, [<<len>>, data]}
+  end
+
+  # Legacy datetime (8-byte datetimen)
+  def encode(nil, %{type: :datetime}) do
+    type = tds_type(:datetimen)
+    {type, <<0x08>>, <<0x00>>}
+  end
+
+  def encode(value, %{type: :datetime}) do
+    ndt = to_naive_datetime(value)
+    type = tds_type(:datetimen)
+    data = encode_legacy_datetime(ndt)
+    {type, <<0x08>>, [<<0x08>>, data]}
+  end
+
+  # Legacy smalldatetime (4-byte datetimen)
+  def encode(nil, %{type: :smalldatetime}) do
+    type = tds_type(:datetimen)
+    {type, <<0x04>>, <<0x00>>}
+  end
+
+  def encode(value, %{type: :smalldatetime}) do
+    ndt = to_naive_datetime(value)
+    type = tds_type(:datetimen)
+    data = encode_smalldatetime(ndt)
+    {type, <<0x04>>, [<<0x04>>, data]}
+  end
+
+  # Tuple format: {y,m,d}
+  def encode({_, _, _} = date_tuple, %{type: :date} = meta) do
+    encode(Date.from_erl!(date_tuple), meta)
+  end
+
+  # Tuple format: {h,m,s} -> default scale 7
+  def encode({h, m, s}, %{type: :time}) do
+    encode_time_tuple({h, m, s, 0}, @max_time_scale)
+  end
+
+  # Tuple format: {h,m,s,fsec} -> scale 7 (100ns units)
+  def encode({h, m, s, fsec}, %{type: :time}) do
+    encode_time_tuple({h, m, s, fsec}, @max_time_scale)
+  end
+
+  # Tuple format: {{y,m,d},{h,m,s}} or {{y,m,d},{h,m,s,fsec}}
+  def encode({date, time}, %{type: :datetime2})
+      when is_tuple(date) do
+    encode_dt2_tuple(date, time, @max_time_scale)
+  end
+
+  # Tuple format: {{y,m,d},{h,m,s|{h,m,s,us}},offset_min}
+  # Replicates old Tds.Types behavior: sends time/date as-is
+  # with the offset appended, no UTC conversion.
+  def encode(
+        {{_, _, _} = d, time, offset_min},
+        %{type: :datetimeoffset}
+      ) do
+    type = tds_type(:datetimeoffsetn)
+    time_tuple = normalize_time_tuple(time)
+    {time_bin, scale} = encode_time_raw(time_tuple, @max_time_scale)
+    date_bin = encode_date(Date.from_erl!(d))
+    data = time_bin <> date_bin <> <<offset_min::little-signed-16>>
+    len = time_byte_length(scale) + 3 + 2
+    {type, <<scale>>, [<<len>>, data]}
+  end
+
+  # -- param_descriptor ------------------------------------------------
+
+  @impl true
+  def param_descriptor(_value, %{type: :date}), do: "date"
+
+  def param_descriptor(%Time{microsecond: {_, s}}, %{type: :time}),
+    do: "time(#{s})"
+
+  def param_descriptor(_value, %{type: :time}), do: "time"
+
+  def param_descriptor(_value, %{type: :datetime}), do: "datetime"
+
+  def param_descriptor(_value, %{type: :smalldatetime}),
+    do: "smalldatetime"
+
+  def param_descriptor(
+        %NaiveDateTime{microsecond: {_, s}},
+        %{type: :datetime2}
+      ),
+      do: "datetime2(#{s})"
+
+  def param_descriptor(_value, %{type: :datetime2}), do: "datetime2"
+
+  def param_descriptor(
+        %DateTime{microsecond: {_, s}},
+        %{type: :datetimeoffset}
+      ),
+      do: "datetimeoffset(#{s})"
+
+  def param_descriptor(_value, %{type: :datetimeoffset}),
+    do: "datetimeoffset"
+
+  # -- infer -----------------------------------------------------------
+
+  @impl true
+  def infer(%Date{}), do: {:ok, %{type: :date}}
+  def infer(%Time{}), do: {:ok, %{type: :time}}
+  def infer(%NaiveDateTime{}), do: {:ok, %{type: :datetime2}}
+  def infer(%DateTime{}), do: {:ok, %{type: :datetimeoffset}}
+  def infer(_value), do: :skip
+
+  # -- private: tuple encoding -----------------------------------------
+
+  defp encode_time_tuple(time_tuple, scale) do
+    type = tds_type(:timen)
+    {data, scale} = encode_time_raw(time_tuple, scale)
+    len = time_byte_length(scale)
+    {type, <<scale>>, [<<len>>, data]}
+  end
+
+  defp encode_dt2_tuple(date, time, scale) do
+    type = tds_type(:datetime2n)
+    time_tuple = normalize_time_tuple(time)
+    {time_bin, scale} = encode_time_raw(time_tuple, scale)
+    date_bin = encode_date(Date.from_erl!(date))
+    len = time_byte_length(scale) + 3
+    {type, <<scale>>, [<<len>>, time_bin <> date_bin]}
+  end
+
+  defp normalize_time_tuple({h, m, s}), do: {h, m, s, 0}
+  defp normalize_time_tuple({_, _, _, _} = t), do: t
+
+  # Convert NaiveDateTime to tuple format for legacy datetime
+  defp to_naive_datetime(%NaiveDateTime{} = ndt), do: ndt
+
+  defp to_naive_datetime({{y, m, d}, {h, mi, s}}) do
+    NaiveDateTime.from_erl!({{y, m, d}, {h, mi, s}})
+  end
+
+  defp to_naive_datetime({{y, m, d}, {h, mi, s, us}})
+       when us < 1_000_000 do
+    NaiveDateTime.from_erl!(
+      {{y, m, d}, {h, mi, s}},
+      {us, 6}
+    )
+  end
+
+  # fsec >= 1_000_000 means 100ns units (scale 7), convert
+  defp to_naive_datetime({{y, m, d}, {h, mi, s, fsec}}) do
+    us = div(fsec, 10)
+
+    NaiveDateTime.from_erl!(
+      {{y, m, d}, {h, mi, s}},
+      {us, 6}
+    )
+  end
+
+  # -- private: date ---------------------------------------------------
+
+  defp decode_date(<<days::little-unsigned-24>>) do
+    date = :calendar.gregorian_days_to_date(days + 366)
+    Date.from_erl!(date, Calendar.ISO)
+  end
+
+  defp encode_date(%Date{} = date) do
+    days =
+      date
+      |> Date.to_erl()
+      |> :calendar.date_to_gregorian_days()
+      |> Kernel.-(366)
+
+    <<days::little-unsigned-24>>
+  end
+
+  # -- private: smalldatetime ------------------------------------------
+
+  defp decode_smalldatetime(<<days::little-unsigned-16, mins::little-unsigned-16>>) do
+    date = :calendar.gregorian_days_to_date(@year_1900_days + days)
+    hour = div(mins, 60)
+    min = mins - hour * 60
+    NaiveDateTime.from_erl!({date, {hour, min, 0}})
+  end
+
+  defp encode_smalldatetime(%NaiveDateTime{} = ndt) do
+    {date, {hour, min, _}} = NaiveDateTime.to_erl(ndt)
+    days = :calendar.date_to_gregorian_days(date) - @year_1900_days
+    mins = hour * 60 + min
+    <<days::little-unsigned-16, mins::little-unsigned-16>>
+  end
+
+  # -- private: datetime -----------------------------------------------
+
+  defp decode_datetime(<<days::little-signed-32, secs300::little-unsigned-32>>) do
+    date = :calendar.gregorian_days_to_date(@year_1900_days + days)
+    milliseconds = round(secs300 * 10 / 3)
+    usec = rem(milliseconds, 1_000)
+    seconds = div(milliseconds, 1_000)
+    {_, {h, m, s}} = :calendar.seconds_to_daystime(seconds)
+
+    NaiveDateTime.from_erl!(
+      {date, {h, m, s}},
+      {usec * 1_000, 3},
+      Calendar.ISO
+    )
+  end
+
+  defp encode_legacy_datetime(%NaiveDateTime{} = ndt) do
+    {date, {h, m, s}} = NaiveDateTime.to_erl(ndt)
+    {us, _} = ndt.microsecond
+    days = :calendar.date_to_gregorian_days(date) - @year_1900_days
+    milliseconds = ((h * 60 + m) * 60 + s) * 1_000 + us / 1_000
+    secs_300 = round(milliseconds / (10 / 3))
+
+    {days, secs_300} =
+      if secs_300 == 25_920_000 do
+        {days + 1, 0}
+      else
+        {days, secs_300}
+      end
+
+    <<days::little-signed-32, secs_300::little-unsigned-32>>
+  end
+
+  # -- private: time ---------------------------------------------------
+
+  defp decode_time(scale, fsec_bin) do
+    parsed_fsec = parse_time_fsec(scale, fsec_bin)
+    fs_per_sec = trunc(:math.pow(10, scale))
+
+    hour = trunc(parsed_fsec / fs_per_sec / @secs_in_hour)
+    rem1 = parsed_fsec - hour * @secs_in_hour * fs_per_sec
+
+    min = trunc(rem1 / fs_per_sec / @secs_in_min)
+    rem2 = rem1 - min * @secs_in_min * fs_per_sec
+
+    sec = trunc(rem2 / fs_per_sec)
+    frac = trunc(rem2 - sec * fs_per_sec)
+
+    {usec, out_scale} = fsec_to_microsecond(frac, scale)
+    Time.from_erl!({hour, min, sec}, {usec, out_scale})
+  end
+
+  defp parse_time_fsec(scale, bin) when scale in [0, 1, 2] do
+    <<val::little-unsigned-24>> = bin
+    val
+  end
+
+  defp parse_time_fsec(scale, bin) when scale in [3, 4] do
+    <<val::little-unsigned-32>> = bin
+    val
+  end
+
+  defp parse_time_fsec(scale, bin) when scale in [5, 6, 7] do
+    <<val::little-unsigned-40>> = bin
+    val
+  end
+
+  defp fsec_to_microsecond(frac, scale) when scale > 6 do
+    {trunc(frac / 10), 6}
+  end
+
+  defp fsec_to_microsecond(frac, scale) do
+    {trunc(frac * :math.pow(10, 6 - scale)), scale}
+  end
+
+  defp encode_time(%Time{} = t) do
+    {h, m, s} = Time.to_erl(t)
+    {_, scale} = t.microsecond
+    fsec = microsecond_to_fsec(t.microsecond)
+    encode_time_raw({h, m, s, fsec}, scale)
+  end
+
+  defp encode_time_raw({hour, min, sec, fsec}, scale) do
+    fs_per_sec = trunc(:math.pow(10, scale))
+
+    total =
+      hour * 3600 * fs_per_sec +
+        min * 60 * fs_per_sec +
+        sec * fs_per_sec +
+        fsec
+
+    bin =
+      cond do
+        scale < 3 -> <<total::little-unsigned-24>>
+        scale < 5 -> <<total::little-unsigned-32>>
+        true -> <<total::little-unsigned-40>>
+      end
+
+    {bin, scale}
+  end
+
+  defp microsecond_to_fsec({us, 6}), do: us
+
+  defp microsecond_to_fsec({us, scale}),
+    do: trunc(us / :math.pow(10, 6 - scale))
+
+  # -- private: datetime2 ----------------------------------------------
+
+  defp decode_datetime2(scale, data) do
+    tlen = time_byte_length(scale)
+    <<time_bin::binary-size(tlen), date_bin::binary-size(3)>> = data
+    date = decode_date(date_bin)
+    time = decode_time(scale, time_bin)
+    NaiveDateTime.new!(date, time)
+  end
+
+  defp encode_datetime2(%NaiveDateTime{} = value) do
+    t = NaiveDateTime.to_time(value)
+    {time_bin, scale} = encode_time(t)
+    date_bin = encode_date(NaiveDateTime.to_date(value))
+    {time_bin <> date_bin, scale}
+  end
+
+  # -- private: datetimeoffset -----------------------------------------
+
+  defp decode_datetimeoffset(scale, data) do
+    tlen = time_byte_length(scale)
+    dt2_len = tlen + 3
+
+    <<dt2_bin::binary-size(dt2_len), _offset_min::little-signed-16>> = data
+
+    # Wire stores UTC time + offset. Return UTC DateTime
+    # (same as old Tds.Types behavior) so roundtrip is stable.
+    naive_utc = decode_datetime2(scale, dt2_bin)
+    DateTime.from_naive!(naive_utc, "Etc/UTC")
+  end
+
+  defp encode_datetimeoffset(%DateTime{utc_offset: offset} = dt, scale) do
+    {dt2_bin, _} =
+      dt
+      |> DateTime.add(-offset)
+      |> DateTime.to_naive()
+      |> encode_ndt_with_scale(scale)
+
+    offset_min = div(offset, 60)
+    dt2_bin <> <<offset_min::little-signed-16>>
+  end
+
+  defp encode_ndt_with_scale(%NaiveDateTime{} = ndt, scale) do
+    {h, m, s} = NaiveDateTime.to_erl(ndt) |> elem(1)
+    fsec = microsecond_to_fsec(ndt.microsecond)
+    {time_bin, scale} = encode_time_raw({h, m, s, fsec}, scale)
+    date_bin = encode_date(NaiveDateTime.to_date(ndt))
+    {time_bin <> date_bin, scale}
+  end
+end
diff --git a/lib/tds/type/decimal.ex b/lib/tds/type/decimal.ex
new file mode 100644
index 0000000..33078ef
--- /dev/null
+++ b/lib/tds/type/decimal.ex
@@ -0,0 +1,148 @@
+defmodule Tds.Type.Decimal do
+  @moduledoc """
+  TDS type handler for decimal/numeric values.
+
+  Handles legacy decimal (0x37), numeric (0x3F) and modern
+  decimaln (0x6A), numericn (0x6C) on decode.
+  Always encodes as decimaln (0x6A) to support NULL.
+
+  Precision and scale come from metadata, never from
+  the process dictionary.
+  """
+
+  @behaviour Tds.Type
+
+  import Tds.Protocol.Constants
+
+  @impl true
+  def type_codes do
+    [
+      tds_type(:decimal),
+      tds_type(:numeric),
+      tds_type(:decimaln),
+      tds_type(:numericn)
+    ]
+  end
+
+  @impl true
+  def type_names, do: [:decimal, :numeric]
+
+  # -- decode_metadata -----------------------------------------------
+
+  @impl true
+  def decode_metadata(<<tds_type(:decimaln), _len::unsigned-8, p::unsigned-8, s::unsigned-8, rest::binary>>) do
+    {:ok, %{data_reader: :bytelen, precision: p, scale: s}, rest}
+  end
+
+  def decode_metadata(<<tds_type(:numericn), _len::unsigned-8, p::unsigned-8, s::unsigned-8, rest::binary>>) do
+    {:ok, %{data_reader: :bytelen, precision: p, scale: s}, rest}
+  end
+
+  def decode_metadata(<<tds_type(:decimal), len::unsigned-8, rest::binary>>) do
+    {:ok, %{data_reader: :bytelen, length: len}, rest}
+  end
+
+  def decode_metadata(<<tds_type(:numeric), len::unsigned-8, rest::binary>>) do
+    {:ok, %{data_reader: :bytelen, length: len}, rest}
+  end
+
+  # -- decode --------------------------------------------------------
+
+  @impl true
+  def decode(nil, _metadata), do: nil
+
+  def decode(<<sign::unsigned-8, value::binary>>, metadata) do
+    size = byte_size(value)
+    <<coef::little-unsigned-size(size)-unit(8)>> = value
+    scale = Map.get(metadata, :scale, 0)
+
+    case sign do
+      1 -> Decimal.new(1, coef, -scale)
+      0 -> Decimal.new(-1, coef, -scale)
+    end
+  end
+
+  # -- encode --------------------------------------------------------
+
+  @impl true
+  def encode(nil, _metadata) do
+    type = tds_type(:decimaln)
+    {type, <<0x05, 0x01, 0x00>>, <<0x00>>}
+  end
+
+  def encode(%Decimal{} = value, _metadata) do
+    {precision, scale} = compute_precision_scale(value)
+    type = tds_type(:decimaln)
+
+    sign = if value.sign == 1, do: 1, else: 0
+    coef_int = wire_coefficient(value, scale)
+
+    coef_bytes = :binary.encode_unsigned(coef_int, :little)
+    coef_size = byte_size(coef_bytes)
+    data_len = data_length(precision)
+    padding = data_len - coef_size
+    value_size = data_len + 1
+    padded = coef_bytes <> <<0::size(padding)-unit(8)>>
+
+    meta = <<value_size::unsigned-8, precision::unsigned-8, scale::unsigned-8>>
+    val = <<value_size::unsigned-8, sign::unsigned-8>> <> padded
+    {type, meta, val}
+  end
+
+  # -- param_descriptor ----------------------------------------------
+
+  @impl true
+  def param_descriptor(nil, _metadata), do: "decimal(1, 0)"
+
+  def param_descriptor(%Decimal{} = value, _metadata) do
+    {precision, scale} = compute_precision_scale(value)
+    "decimal(#{precision}, #{scale})"
+  end
+
+  # -- infer ---------------------------------------------------------
+
+  @impl true
+  def infer(%Decimal{}), do: {:ok, %{}}
+  def infer(_value), do: :skip
+
+  # -- private -------------------------------------------------------
+
+  defp compute_precision_scale(%Decimal{coef: coef, exp: exp}) do
+    coef_digits = digit_count(coef)
+
+    if exp >= 0 do
+      {coef_digits + exp, 0}
+    else
+      scale = -exp
+      int_digits = max(coef_digits + exp, 1)
+      {int_digits + scale, scale}
+    end
+  end
+
+  defp wire_coefficient(%Decimal{coef: coef, exp: exp}, scale) do
+    # The wire integer is: abs(value) * 10^scale
+    # which equals coef * 10^(exp + scale)
+    shift = exp + scale
+    coef * pow10(shift)
+  end
+
+  defp digit_count(0), do: 1
+
+  defp digit_count(n) when is_integer(n) and n > 0 do
+    n |> Integer.to_string() |> byte_size()
+  end
+
+  defp pow10(0), do: 1
+  defp pow10(n) when n > 0, do: 10 ** n
+
+  defp data_length(precision) when precision <= 9, do: 4
+  defp data_length(precision) when precision <= 19, do: 8
+  defp data_length(precision) when precision <= 28, do: 12
+  defp data_length(precision) when precision <= 38, do: 16
+
+  defp data_length(precision) do
+    raise ArgumentError,
+          "size (#{precision}) given to the type " <>
+            "'decimal' exceeds the maximum allowed (38)"
+  end
+end
diff --git a/lib/tds/type/float.ex b/lib/tds/type/float.ex
new file mode 100644
index 0000000..041e36b
--- /dev/null
+++ b/lib/tds/type/float.ex
@@ -0,0 +1,71 @@
+defmodule Tds.Type.Float do
+  @moduledoc """
+  TDS type handler for floating-point values.
+
+  Handles fixed real (0x3B, 4-byte float-32), fixed float (0x3E,
+  8-byte float-64) and variable floatn (0x6D) on decode.
+  Always encodes as floatn (0x6D) with 8-byte float-64 to
+  support NULL.
+  """
+
+  @behaviour Tds.Type
+
+  import Tds.Protocol.Constants
+
+  @impl true
+  def type_codes do
+    [tds_type(:real), tds_type(:float), tds_type(:floatn)]
+  end
+
+  @impl true
+  def type_names, do: [:float]
+
+  # -- decode_metadata -----------------------------------------------
+
+  @impl true
+  def decode_metadata(<<tds_type(:real), rest::binary>>) do
+    {:ok, %{data_reader: {:fixed, 4}}, rest}
+  end
+
+  def decode_metadata(<<tds_type(:float), rest::binary>>) do
+    {:ok, %{data_reader: {:fixed, 8}}, rest}
+  end
+
+  def decode_metadata(<<tds_type(:floatn), length::unsigned-8, rest::binary>>) do
+    {:ok, %{data_reader: :bytelen, length: length}, rest}
+  end
+
+  # -- decode --------------------------------------------------------
+
+  @impl true
+  def decode(nil, _metadata), do: nil
+
+  def decode(<<val::little-float-32>>, _metadata), do: val
+
+  def decode(<<val::little-float-64>>, _metadata), do: val
+
+  # -- encode --------------------------------------------------------
+
+  @impl true
+  def encode(nil, _metadata) do
+    type = tds_type(:floatn)
+    {type, <<0x08>>, <<0x00>>}
+  end
+
+  def encode(value, _metadata) when is_float(value) do
+    type = tds_type(:floatn)
+    {type, <<0x08>>, <<0x08, value::little-float-64>>}
+  end
+
+  # -- param_descriptor ----------------------------------------------
+
+  @impl true
+  def param_descriptor(nil, _metadata), do: "decimal(1,0)"
+  def param_descriptor(_value, _metadata), do: "float(53)"
+
+  # -- infer ---------------------------------------------------------
+
+  @impl true
+  def infer(value) when is_float(value), do: {:ok, %{}}
+  def infer(_value), do: :skip
+end
diff --git a/lib/tds/type/integer.ex b/lib/tds/type/integer.ex
new file mode 100644
index 0000000..293f1ed
--- /dev/null
+++ b/lib/tds/type/integer.ex
@@ -0,0 +1,132 @@
+defmodule Tds.Type.Integer do
+  @moduledoc """
+  TDS type handler for integer values.
+
+  Handles fixed tinyint (0x30), smallint (0x34), int (0x38),
+  bigint (0x7F) and variable intn (0x26) on decode.
+  Always encodes as intn (0x26) to support NULL.
+  """
+
+  @behaviour Tds.Type
+
+  import Tds.Protocol.Constants
+
+  @impl true
+  def type_codes do
+    [
+      tds_type(:null),
+      tds_type(:tinyint),
+      tds_type(:smallint),
+      tds_type(:int),
+      tds_type(:bigint),
+      tds_type(:intn)
+    ]
+  end
+
+  @impl true
+  def type_names, do: [:integer]
+
+  # -- decode_metadata -----------------------------------------------
+
+  @impl true
+  def decode_metadata(<<tds_type(:null), rest::binary>>) do
+    {:ok, %{data_reader: {:fixed, 0}}, rest}
+  end
+
+  def decode_metadata(<<tds_type(:tinyint), rest::binary>>) do
+    {:ok, %{data_reader: {:fixed, 1}}, rest}
+  end
+
+  def decode_metadata(<<tds_type(:smallint), rest::binary>>) do
+    {:ok, %{data_reader: {:fixed, 2}}, rest}
+  end
+
+  def decode_metadata(<<tds_type(:int), rest::binary>>) do
+    {:ok, %{data_reader: {:fixed, 4}}, rest}
+  end
+
+  def decode_metadata(<<tds_type(:bigint), rest::binary>>) do
+    {:ok, %{data_reader: {:fixed, 8}}, rest}
+  end
+
+  def decode_metadata(<<tds_type(:intn), length::unsigned-8, rest::binary>>) do
+    {:ok, %{data_reader: :bytelen, length: length}, rest}
+  end
+
+  # -- decode --------------------------------------------------------
+
+  @impl true
+  def decode(nil, _metadata), do: nil
+  def decode(<<>>, _metadata), do: nil
+
+  def decode(<<val::unsigned-8>>, _metadata), do: val
+
+  def decode(<<val::little-signed-16>>, _metadata), do: val
+
+  def decode(<<val::little-signed-32>>, _metadata), do: val
+
+  def decode(<<val::little-signed-64>>, _metadata), do: val
+
+  # -- encode --------------------------------------------------------
+
+  @impl true
+  def encode(nil, _metadata) do
+    type = tds_type(:intn)
+    {type, <<0x04>>, <<0x00>>}
+  end
+
+  def encode(value, _metadata) when is_integer(value) do
+    type = tds_type(:intn)
+    size = wire_size(value)
+    {type, <<size>>, [<<size>>, encode_value(value, size)]}
+  end
+
+  # -- param_descriptor ----------------------------------------------
+
+  @impl true
+  def param_descriptor(nil, _metadata), do: "int"
+
+  def param_descriptor(0, _metadata), do: "int"
+
+  def param_descriptor(value, _metadata) when value >= 1, do: "bigint"
+
+  def param_descriptor(value, _metadata) when value < 0 do
+    precision =
+      value
+      |> Integer.to_string()
+      |> String.length()
+      |> Kernel.-(1)
+
+    "decimal(#{precision}, 0)"
+  end
+
+  # -- infer ---------------------------------------------------------
+
+  @impl true
+  def infer(value) when is_integer(value), do: {:ok, %{}}
+  def infer(_value), do: :skip
+
+  # -- private -------------------------------------------------------
+
+  defp wire_size(value)
+       when value in -2_147_483_648..2_147_483_647,
+       do: 4
+
+  defp wire_size(value)
+       when value in -9_223_372_036_854_775_808..9_223_372_036_854_775_807,
+       do: 8
+
+  defp wire_size(value) do
+    raise ArgumentError,
+          "integer #{value} exceeds 64-bit range; " <>
+            "use Decimal.new/1 instead"
+  end
+
+  defp encode_value(value, 4) do
+    <<value::little-signed-32>>
+  end
+
+  defp encode_value(value, 8) do
+    <<value::little-signed-64>>
+  end
+end
diff --git a/lib/tds/type/money.ex b/lib/tds/type/money.ex
new file mode 100644
index 0000000..bf01006
--- /dev/null
+++ b/lib/tds/type/money.ex
@@ -0,0 +1,102 @@
+defmodule Tds.Type.Money do
+  @moduledoc """
+  TDS type handler for money values.
+
+  Handles fixed money (0x3C, 8 bytes), fixed smallmoney (0x7A, 4 bytes),
+  and variable moneyn (0x6E) on decode.
+
+  Returns `%Decimal{}` instead of float for exact representation
+  of monetary values. This is a breaking change from the old
+  Tds.Types module which returned floats.
+
+  Wire format:
+
+  - smallmoney: 4-byte little-endian signed integer (1/10000 units)
+  - money: 8 bytes, high 4 bytes then low 4 bytes (both LE),
+    reinterpreted as a signed 64-bit value (1/10000 units)
+  """
+
+  @behaviour Tds.Type
+
+  import Tds.Protocol.Constants
+
+  @max_smallmoney_units 2_147_483_647
+  @min_smallmoney_units -2_147_483_648
+
+  @impl true
+  def type_codes do
+    [tds_type(:money), tds_type(:smallmoney), tds_type(:moneyn)]
+  end
+
+  @impl true
+  def type_names, do: [:money, :smallmoney]
+
+  @impl true
+  def decode_metadata(<<tds_type(:money), rest::binary>>) do
+    {:ok, %{data_reader: {:fixed, 8}}, rest}
+  end
+
+  def decode_metadata(<<tds_type(:smallmoney), rest::binary>>) do
+    {:ok, %{data_reader: {:fixed, 4}}, rest}
+  end
+
+  def decode_metadata(<<tds_type(:moneyn), length::unsigned-8, rest::binary>>) do
+    {:ok, %{data_reader: :bytelen, length: length}, rest}
+  end
+
+  @impl true
+  def decode(nil, _metadata), do: nil
+
+  def decode(<<val::little-signed-32>>, _metadata) do
+    sign = if val < 0, do: -1, else: 1
+    Decimal.new(sign, abs(val), -4)
+  end
+
+  def decode(
+        <<high::little-unsigned-32, low::little-unsigned-32>>,
+        _metadata
+      ) do
+    <<combined::signed-64>> = <<high::unsigned-32, low::unsigned-32>>
+    sign = if combined < 0, do: -1, else: 1
+    Decimal.new(sign, abs(combined), -4)
+  end
+
+  @impl true
+  def encode(nil, _metadata) do
+    type = tds_type(:moneyn)
+    {type, <<0x08>>, <<0x00>>}
+  end
+
+  def encode(%Decimal{} = dec, _metadata) do
+    type = tds_type(:moneyn)
+    units = decimal_to_units(dec)
+    <<high::unsigned-32, low::unsigned-32>> = <<units::signed-64>>
+
+    {type, <<0x08>>, <<0x08, high::little-unsigned-32, low::little-unsigned-32>>}
+  end
+
+  @impl true
+  def param_descriptor(nil, _metadata), do: "money"
+
+  def param_descriptor(%Decimal{} = dec, _metadata) do
+    units = decimal_to_units(dec)
+
+    if units >= @min_smallmoney_units and
+         units <= @max_smallmoney_units do
+      "smallmoney"
+    else
+      "money"
+    end
+  end
+
+  @impl true
+  def infer(_value), do: :skip
+
+  defp decimal_to_units(%Decimal{sign: sign, coef: coef, exp: exp}) do
+    scale_shift = exp + 4
+
+    raw =
+      if scale_shift >= 0,
+        do: coef * pow10(scale_shift),
+        else: div(coef, pow10(-scale_shift))
+
+    if sign == -1, do: -raw, else: raw
+  end
+
+  defp pow10(0), do: 1
+  defp pow10(n) when n > 0, do: 10 *
pow10(n - 1) +end diff --git a/lib/tds/type/registry.ex b/lib/tds/type/registry.ex new file mode 100644 index 0000000..ede48ce --- /dev/null +++ b/lib/tds/type/registry.ex @@ -0,0 +1,140 @@ +defmodule Tds.Type.Registry do + @moduledoc """ + Maps TDS type codes and Elixir type names to handler modules. + + Stored in connection state, built once at connection init. + User-provided handler modules override built-in handlers + for the same type codes or names. + """ + + @type t :: %__MODULE__{ + by_code: %{byte() => module()}, + by_name: %{atom() => module()}, + user_types: [module()] + } + + @enforce_keys [:by_code, :by_name, :user_types] + defstruct [:by_code, :by_name, :user_types] + + @doc """ + Build registry from user-provided and built-in handler lists. + + User handlers override built-ins for the same type code or name + because `builtin_types ++ extra_types` means later entries win + in the map comprehension. + """ + @spec new( + extra_types :: [module()], + builtin_types :: [module()] + ) :: t() + def new(extra_types \\ [], builtin_types \\ default_builtins()) do + all = builtin_types ++ extra_types + + by_code = + for handler <- all, + code <- handler.type_codes(), + into: %{}, + do: {code, handler} + + by_name = + for handler <- all, + name <- handler.type_names(), + into: %{}, + do: {name, handler} + + %__MODULE__{ + by_code: by_code, + by_name: by_name, + user_types: extra_types + } + end + + @doc "Decode path: type code -> handler module." + @spec handler_for_code(t(), byte()) :: {:ok, module()} | :error + def handler_for_code(%__MODULE__{by_code: by_code}, code) do + Map.fetch(by_code, code) + end + + @doc "Encode path: atom type name -> handler module." + @spec handler_for_name(t(), atom()) :: {:ok, module()} | :error + def handler_for_name(%__MODULE__{by_name: by_name}, name) do + Map.fetch(by_name, name) + end + + @doc """ + Encode path: infer handler from Elixir value. 
+ + Tries user types first (linear scan), then falls back + to guard-based type name lookup in the by_name map. + """ + @spec infer(t(), term()) :: + {:ok, module(), Tds.Type.metadata()} | :error + def infer(%__MODULE__{} = reg, value) do + case try_handlers(reg.user_types, value) do + {:ok, _mod, _meta} = hit -> + hit + + :skip -> + infer_from_name(reg.by_name, value) + end + end + + defp infer_from_name(by_name, value) do + name = value_to_type_name(value) + + case Map.fetch(by_name, name) do + {:ok, handler} -> call_infer(handler, value) + :error -> :error + end + end + + # Boolean MUST come before integer (booleans are integers) + defp value_to_type_name(v) when is_boolean(v), do: :boolean + defp value_to_type_name(v) when is_integer(v), do: :integer + defp value_to_type_name(v) when is_float(v), do: :float + + defp value_to_type_name(v) when is_binary(v) do + if String.valid?(v), do: :string, else: :binary + end + + defp value_to_type_name(%Decimal{}), do: :decimal + defp value_to_type_name(%Date{}), do: :date + defp value_to_type_name(%Time{}), do: :time + defp value_to_type_name(%NaiveDateTime{}), do: :datetime2 + defp value_to_type_name(%DateTime{}), do: :datetimeoffset + defp value_to_type_name(nil), do: :binary + defp value_to_type_name(_), do: nil + + defp call_infer(handler, value) do + case handler.infer(value) do + {:ok, meta} -> {:ok, handler, meta} + :skip -> :error + end + end + + defp try_handlers([], _value), do: :skip + + defp try_handlers([handler | rest], value) do + case handler.infer(value) do + {:ok, meta} -> {:ok, handler, meta} + :skip -> try_handlers(rest, value) + end + end + + defp default_builtins do + [ + Tds.Type.Boolean, + Tds.Type.Integer, + Tds.Type.Float, + Tds.Type.Decimal, + Tds.Type.Money, + Tds.Type.String, + Tds.Type.Binary, + Tds.Type.DateTime, + Tds.Type.UUID, + Tds.Type.Xml, + Tds.Type.Variant, + Tds.Type.Udt + ] + end +end diff --git a/lib/tds/type/string.ex b/lib/tds/type/string.ex new file mode 100644 index 
0000000..a7b488c
--- /dev/null
+++ b/lib/tds/type/string.ex
@@ -0,0 +1,231 @@
+defmodule Tds.Type.String do
+  @moduledoc """
+  TDS type handler for string values.
+
+  Handles 8 type codes on decode:
+  - bigchar (0xAF), bigvarchar (0xA7) — single-byte with collation
+  - nvarchar (0xE7), nchar (0xEF) — UCS-2 (UTF-16LE)
+  - legacy varchar (0x27), legacy char (0x2F) — single-byte short
+  - text (0x23), ntext (0x63) — longlen with table name parts
+
+  Always encodes as nvarchar for parameters (UCS-2).
+  """
+
+  @behaviour Tds.Type
+
+  import Tds.Protocol.Constants
+
+  alias Tds.Encoding.UCS2
+  alias Tds.Protocol.Collation
+
+  @null_collation <<0x00, 0x00, 0x00, 0x00, 0x00>>
+
+  # UCS-2 type codes (decode as UTF-16LE)
+  @ucs2_types [tds_type(:nvarchar), tds_type(:nchar), tds_type(:ntext)]
+
+  @impl true
+  def type_codes do
+    [
+      tds_type(:bigchar),
+      tds_type(:bigvarchar),
+      tds_type(:nvarchar),
+      tds_type(:nchar),
+      tds_type(:text),
+      tds_type(:varchar),
+      tds_type(:char),
+      tds_type(:ntext)
+    ]
+  end
+
+  @impl true
+  def type_names, do: [:string]
+
+  # -- decode_metadata -------------------------------------------------
+
+  # Big types: bigvarchar (0xA7), bigchar (0xAF), nvarchar (0xE7),
+  # nchar (0xEF) — 2-byte LE max_length + 5-byte collation
+  @impl true
+  def decode_metadata(
+        <<type_code::unsigned-8, length::little-unsigned-16,
+          collation_bin::binary-5, rest::binary>>
+      )
+      when type_code in [
+             tds_type(:bigvarchar),
+             tds_type(:bigchar),
+             tds_type(:nvarchar),
+             tds_type(:nchar)
+           ] do
+    {:ok, collation} = Collation.decode(collation_bin)
+    data_reader = if length == 0xFFFF, do: :plp, else: :shortlen
+    encoding = encoding_for(type_code)
+
+    meta = %{
+      data_reader: data_reader,
+      collation: collation,
+      encoding: encoding,
+      length: length
+    }
+
+    {:ok, meta, rest}
+  end
+
+  # Legacy short types: varchar (0x27), char (0x2F) — 1-byte length
+  # + 5-byte collation
+  def decode_metadata(
+        <<type_code::unsigned-8, length::unsigned-8,
+          collation_bin::binary-5, rest::binary>>
+      )
+      when type_code in [tds_type(:varchar), tds_type(:char)] do
+    {:ok, collation} = Collation.decode(collation_bin)
+
+    meta = %{
+      data_reader: :bytelen,
+      collation: collation,
+      encoding: :single_byte,
+      length: length
+    }
+
+    {:ok, meta, rest}
+  end
+
+  # text (0x23) — 4-byte length + collation + numparts table names
+  def decode_metadata(
+        <<tds_type(:text), length::little-unsigned-32,
+          collation_bin::binary-5, numparts::signed-8, rest::binary>>
+      ) do
+    {:ok, collation} = Collation.decode(collation_bin)
+    rest = skip_table_parts(numparts, rest)
+
+    meta = %{
+      data_reader: :longlen,
+      collation: collation,
+      encoding: :single_byte,
+      length: length
+    }
+
+    {:ok, meta, rest}
+  end
+
+  # ntext (0x63) — 4-byte length + collation + numparts table names
+  def decode_metadata(
+        <<tds_type(:ntext), length::little-unsigned-32,
+          collation_bin::binary-5, numparts::signed-8, rest::binary>>
+      ) do
+    {:ok, collation} = Collation.decode(collation_bin)
+    rest = skip_table_parts(numparts, rest)
+
+    meta = %{
+      data_reader: :longlen,
+      collation: collation,
+      encoding: :ucs2,
+      length: length
+    }
+
+    {:ok, meta, rest}
+  end
+
+  # -- decode ----------------------------------------------------------
+
+  @impl true
+  def decode(nil, _metadata), do: nil
+
+  def decode(<<>>, _metadata), do: ""
+
+  def decode(data, %{encoding: :ucs2}) do
+    UCS2.to_string(data)
+  end
+
+  def decode(data, %{encoding: :single_byte, collation: col}) do
+    Tds.Utils.decode_chars(data, col.codepage)
+  end
+
+  # -- encode ----------------------------------------------------------
+
+  @impl true
+  def encode(nil, _metadata) do
+    type = tds_type(:nvarchar)
+    meta_bin = <<0xFFFF::little-unsigned-16>> <> @null_collation
+    value_bin = <<0xFFFFFFFFFFFFFFFF::little-unsigned-64>>
+    {type, meta_bin, value_bin}
+  end
+
+  def encode(value, _metadata) when is_binary(value) do
+    type = tds_type(:nvarchar)
+    ucs2 = UCS2.from_string(value)
+    ucs2_size = byte_size(ucs2)
+
+    cond do
+      ucs2_size == 0 ->
+        meta_bin = <<0xFFFF::little-unsigned-16>> <> @null_collation
+        value_bin = <<0::unsigned-64, 0::unsigned-32>>
+        {type, meta_bin, value_bin}
+
+      ucs2_size > 8000 ->
+        meta_bin = <<0xFFFF::little-unsigned-16>> <> @null_collation
+        value_bin = encode_plp(ucs2)
+        {type, meta_bin, value_bin}
+
+      true ->
+        meta_bin =
+          <<ucs2_size::little-unsigned-16>> <>
+            @null_collation
+
+        value_bin =
+          <<ucs2_size::little-unsigned-16>> <> ucs2
+
+        {type, meta_bin, value_bin}
+    end
+  end
+
+  # -- param_descriptor ------------------------------------------------
+
+  @impl true
+  def param_descriptor(nil, _metadata), do: "nvarchar(1)"
+
+  def param_descriptor(value, _metadata) when is_binary(value) do
+    len = String.length(value)
+
+    cond do
+      len <= 0 -> "nvarchar(1)"
+      len <= 2_000 -> "nvarchar(2000)"
+      true -> "nvarchar(max)"
+    end
+  end
+
+  # -- infer -----------------------------------------------------------
+
+  @impl true
+  def infer(value) when is_binary(value), do: {:ok, %{}}
+  def infer(_value), do: :skip
+
+  # -- private helpers -------------------------------------------------
+
+  defp encoding_for(type_code) when type_code in @ucs2_types,
+    do: :ucs2
+
+  defp encoding_for(_type_code), do: :single_byte
+
+  defp skip_table_parts(0, rest), do: rest
+
+  defp skip_table_parts(n, rest) when n > 0 do
+    <<len::little-unsigned-16, _name::binary-size(len)-unit(16),
+      next::binary>> = rest
+
+    skip_table_parts(n - 1, next)
+  end
+
+  defp encode_plp(data) do
+    size = byte_size(data)
+
+    <<size::little-unsigned-64>> <>
+      encode_plp_chunks(size, data, <<>>) <>
+      <<0::little-unsigned-32>>
+  end
+
+  defp encode_plp_chunks(0, _data, buf), do: buf
+
+  defp encode_plp_chunks(size, data, buf) do
+    # Use lower 32 bits of size as chunk size (matches Tds.Types)
+    <<_hi::unsigned-32, chunk_size::unsigned-32>> =
+      <<size::unsigned-64>>
+
+    <<chunk::binary-size(chunk_size), rest::binary>> = data
+    plp = <<chunk_size::little-unsigned-32>> <> chunk
+    encode_plp_chunks(size - chunk_size, rest, buf <> plp)
+  end
+end
diff --git a/lib/tds/type/udt.ex b/lib/tds/type/udt.ex
new file mode 100644
index 0000000..9aec491
--- /dev/null
+++ b/lib/tds/type/udt.ex
@@ -0,0 +1,111 @@
+defmodule Tds.Type.Udt do
+  @moduledoc """
+  TDS type handler for CLR User-Defined Type values.
+
+  Handles 1 type code on decode:
+  - udt (0xF0) -- 2-byte LE max_length, shortlen or PLP
+
+  UDT values are passed through as raw binary. Application code
+  (e.g. Ecto custom types) is responsible for interpreting the
+  binary payload. Built-in UDT types like HierarchyId are also
+  returned as raw bytes.
+
+  Always encodes as bigvarbinary (0xA5) for parameters.
+  """
+
+  @behaviour Tds.Type
+
+  import Tds.Protocol.Constants
+
+  # -- type_codes / type_names ----------------------------------------
+
+  @impl true
+  def type_codes, do: [tds_type(:udt)]
+
+  @impl true
+  def type_names, do: [:udt]
+
+  # -- decode_metadata ------------------------------------------------
+
+  @impl true
+  def decode_metadata(<<tds_type(:udt), length::little-unsigned-16, rest::binary>>) do
+    data_reader = if length == 0xFFFF, do: :plp, else: :shortlen
+
+    meta = %{
+      data_reader: data_reader,
+      length: length
+    }
+
+    {:ok, meta, rest}
+  end
+
+  # -- decode ---------------------------------------------------------
+
+  @impl true
+  def decode(nil, _metadata), do: nil
+  def decode(<<>>, _metadata), do: <<>>
+  def decode(data, _metadata), do: :binary.copy(data)
+
+  # -- encode ---------------------------------------------------------
+
+  @impl true
+  def encode(nil, _metadata) do
+    type = tds_type(:bigvarbinary)
+    meta_bin = <<0xFFFF::little-unsigned-16>>
+    value_bin = <<0xFFFFFFFFFFFFFFFF::little-unsigned-64>>
+    {type, meta_bin, value_bin}
+  end
+
+  def encode(value, _metadata) when is_binary(value) do
+    type = tds_type(:bigvarbinary)
+    size = byte_size(value)
+
+    cond do
+      size == 0 ->
+        meta_bin = <<0xFFFF::little-unsigned-16>>
+        value_bin = <<0::unsigned-64, 0::unsigned-32>>
+        {type, meta_bin, value_bin}
+
+      size > 8000 ->
+        meta_bin = <<0xFFFF::little-unsigned-16>>
+        value_bin = encode_plp(value)
+        {type, meta_bin, value_bin}
+
+      true ->
+        meta_bin = <<size::little-unsigned-16>>
+        value_bin = <<size::little-unsigned-16>> <> value
+        {type, meta_bin, value_bin}
+    end
+  end
+
+  # -- param_descriptor ------------------------------------------------
+
+  @impl true
+  def param_descriptor(_value, _metadata), do: "varbinary(max)"
+
+  # -- infer -----------------------------------------------------------
+
+  @impl true
+  def infer(_value), do: :skip
+
+  # -- private helpers -------------------------------------------------
+
+  defp encode_plp(data) do
+    size = byte_size(data)
+
+    <<size::little-unsigned-64>> <>
+      encode_plp_chunks(size, data, <<>>) <>
+      <<0::little-unsigned-32>>
+  end
+
+  defp encode_plp_chunks(0, _data, buf), do: buf
+
+  defp encode_plp_chunks(size, data, buf) do
+    <<_hi::unsigned-32, chunk_size::unsigned-32>> =
+      <<size::unsigned-64>>
+
+    <<chunk::binary-size(chunk_size), rest::binary>> = data
+    plp = <<chunk_size::little-unsigned-32>> <> chunk
+    encode_plp_chunks(size - chunk_size, rest, buf <> plp)
+  end
+end
diff --git a/lib/tds/type/uuid.ex b/lib/tds/type/uuid.ex
new file mode 100644
index 0000000..2ae62a9
--- /dev/null
+++ b/lib/tds/type/uuid.ex
@@ -0,0 +1,135 @@
+defmodule Tds.Type.UUID do
+  @moduledoc """
+  TDS type handler for UUID (uniqueidentifier) values.
+
+  Tds.Types.UUID generates and works with mixed-endian bytes.
+  This handler sends and receives bytes as-is to preserve
+  roundtrip compatibility with that module.
+  """
+
+  @behaviour Tds.Type
+
+  import Tds.Protocol.Constants
+
+  # -- type_codes / type_names -----------------------------------------
+
+  @impl true
+  def type_codes, do: [tds_type(:uniqueidentifier)]
+
+  @impl true
+  def type_names, do: [:uuid]
+
+  # -- decode_metadata -------------------------------------------------
+
+  @impl true
+  def decode_metadata(<<tds_type(:uniqueidentifier), _length::unsigned-8, rest::binary>>) do
+    {:ok, %{data_reader: :bytelen}, rest}
+  end
+
+  # -- decode ----------------------------------------------------------
+
+  # Tds.Types.UUID works in mixed-endian format. Bytes are stored
+  # and returned without reordering to preserve existing roundtrip.
+  @impl true
+  def decode(nil, _metadata), do: nil
+  def decode(data, _metadata), do: :binary.copy(data)
+
+  # -- encode ----------------------------------------------------------
+
+  # Bytes are sent as-is (no reorder) to match Tds.Types.UUID,
+  # which generates mixed-endian bytes.
+  @impl true
+  def encode(nil, _metadata) do
+    type = tds_type(:uniqueidentifier)
+    {type, <<0x10>>, <<0x00>>}
+  end
+
+  def encode(<<_::128>> = bin, _metadata) do
+    type = tds_type(:uniqueidentifier)
+    {type, <<0x10>>, <<0x10>> <> bin}
+  end
+
+  def encode(
+        <<_::64, ?-, _::32, ?-, _::32, ?-, _::32, ?-, _::96>> = str,
+        metadata
+      ) do
+    encode(parse_uuid_string(str), metadata)
+  end
+
+  # -- param_descriptor ------------------------------------------------
+
+  @impl true
+  def param_descriptor(_value, _metadata), do: "uniqueidentifier"
+
+  # -- infer -----------------------------------------------------------
+
+  @impl true
+  def infer(<<_::128>>), do: {:ok, %{}}
+  def infer(_value), do: :skip
+
+  # -- private helpers -------------------------------------------------
+
+  defp parse_uuid_string(
+         <<a1, a2, a3, a4, a5, a6, a7, a8, ?-, b1, b2, b3, b4, ?-,
+           c1, c2, c3, c4, ?-, d1, d2, d3, d4, ?-,
+           e1, e2, e3, e4, e5, e6, e7, e8, e9, e10, e11, e12>>
+       ) do
+    <<
+      hex(a1)::4,
+      hex(a2)::4,
+      hex(a3)::4,
+      hex(a4)::4,
+      hex(a5)::4,
+      hex(a6)::4,
+      hex(a7)::4,
+      hex(a8)::4,
+      hex(b1)::4,
+      hex(b2)::4,
+      hex(b3)::4,
+      hex(b4)::4,
+      hex(c1)::4,
+      hex(c2)::4,
+      hex(c3)::4,
+      hex(c4)::4,
+      hex(d1)::4,
+      hex(d2)::4,
+      hex(d3)::4,
+      hex(d4)::4,
+      hex(e1)::4,
+      hex(e2)::4,
+      hex(e3)::4,
+      hex(e4)::4,
+      hex(e5)::4,
+      hex(e6)::4,
+      hex(e7)::4,
+      hex(e8)::4,
+      hex(e9)::4,
+      hex(e10)::4,
+      hex(e11)::4,
+      hex(e12)::4
+    >>
+  end
+
+  @compile {:inline, hex: 1}
+  defp hex(?0), do: 0
+  defp hex(?1), do: 1
+  defp hex(?2), do: 2
+  defp hex(?3), do: 3
+  defp hex(?4), do: 4
+  defp hex(?5), do: 5
+  defp hex(?6), do: 6
+  defp hex(?7), do: 7
+  defp hex(?8), do: 8
+  defp hex(?9), do: 9
+  defp hex(?a), do: 10
+  defp hex(?b), do: 11
+  defp hex(?c), do: 12
+  defp hex(?d), do: 13
+  defp hex(?e), do: 14
+  defp hex(?f), do: 15
+  defp hex(?A), do: 10
+  defp hex(?B), do: 11
+  defp hex(?C), do: 12
+  defp hex(?D), do: 13
+  defp hex(?E), do: 14
+  defp hex(?F), do: 15
+end
diff --git a/lib/tds/type/variant.ex b/lib/tds/type/variant.ex
new file mode 100644
index 0000000..4a2ac17
--- /dev/null
+++ b/lib/tds/type/variant.ex
@@ -0,0 +1,65 @@
+defmodule Tds.Type.Variant do
+  @moduledoc """
+  TDS type handler for sql_variant values (stub).
+
+  Handles 1 type code on decode:
+  - variant (0x62) -- 4-byte LE max_length, variant data reader
+
+  This is a stub handler. Decode returns raw binary without
+  inner-type dispatch. Full variant decoding (reading the inner
+  type code and delegating to the appropriate handler) is deferred.
+
+  Encoding sql_variant parameters is not supported by TDS RPC,
+  so encode raises at runtime.
+  """
+
+  @behaviour Tds.Type
+
+  import Tds.Protocol.Constants
+
+  # -- type_codes / type_names ----------------------------------------
+
+  @impl true
+  def type_codes, do: [tds_type(:variant)]
+
+  @impl true
+  def type_names, do: [:variant]
+
+  # -- decode_metadata ------------------------------------------------
+
+  @impl true
+  def decode_metadata(<<tds_type(:variant), length::little-unsigned-32, rest::binary>>) do
+    meta = %{
+      data_reader: :variant,
+      length: length
+    }
+
+    {:ok, meta, rest}
+  end
+
+  # -- decode ----------------------------------------------------------
+
+  @impl true
+  def decode(nil, _metadata), do: nil
+  def decode(<<>>, _metadata), do: <<>>
+  def decode(data, _metadata), do: :binary.copy(data)
+
+  # -- encode ----------------------------------------------------------
+
+  @impl true
+  def encode(_value, _metadata) do
+    raise RuntimeError,
+          "sql_variant encoding is not supported. " <>
+            "TDS does not allow sql_variant as an RPC parameter type."
+  end
+
+  # -- param_descriptor ------------------------------------------------
+
+  @impl true
+  def param_descriptor(_value, _metadata), do: "sql_variant"
+
+  # -- infer -----------------------------------------------------------
+
+  @impl true
+  def infer(_value), do: :skip
+end
diff --git a/lib/tds/type/xml.ex b/lib/tds/type/xml.ex
new file mode 100644
index 0000000..cedd4c7
--- /dev/null
+++ b/lib/tds/type/xml.ex
@@ -0,0 +1,128 @@
+defmodule Tds.Type.Xml do
+  @moduledoc """
+  TDS type handler for XML values.
+
+  Handles 1 type code on decode:
+  - xml (0xF1) — PLP with optional schema info
+
+  Metadata includes a schema presence byte. If a schema is present,
+  db_name, owner_name, and collection_name are read and discarded
+  (they are not needed for decode/encode).
+
+  Always encodes as nvarchar for parameters (UCS-2 PLP).
+  """
+
+  @behaviour Tds.Type
+
+  import Tds.Protocol.Constants
+
+  alias Tds.Encoding.UCS2
+
+  @null_collation <<0x00, 0x00, 0x00, 0x00, 0x00>>
+
+  # -- type_codes / type_names ----------------------------------------
+
+  @impl true
+  def type_codes, do: [tds_type(:xml)]
+
+  @impl true
+  def type_names, do: [:xml]
+
+  # -- decode_metadata ------------------------------------------------
+
+  # No schema (0x00): just the presence byte
+  @impl true
+  def decode_metadata(<<tds_type(:xml), 0x00, rest::binary>>) do
+    {:ok, %{data_reader: :plp}, rest}
+  end
+
+  # With schema (0x01): read and discard db, owner, collection
+  def decode_metadata(<<tds_type(:xml), 0x01, rest::binary>>) do
+    rest = skip_schema_info(rest)
+    {:ok, %{data_reader: :plp}, rest}
+  end
+
+  # -- decode ---------------------------------------------------------
+
+  @impl true
+  def decode(nil, _metadata), do: nil
+  def decode(<<>>, _metadata), do: ""
+  def decode(data, _metadata), do: UCS2.to_string(data)
+
+  # -- encode ---------------------------------------------------------
+
+  @impl true
+  def encode(nil, _metadata) do
+    type = tds_type(:nvarchar)
+    meta_bin = <<0xFFFF::little-unsigned-16>> <> @null_collation
+    value_bin = <<0xFFFFFFFFFFFFFFFF::little-unsigned-64>>
+    {type, meta_bin, value_bin}
+  end
+
+  def encode(value, _metadata) when is_binary(value) do
+    type = tds_type(:nvarchar)
+    ucs2 = UCS2.from_string(value)
+    ucs2_size = byte_size(ucs2)
+
+    cond do
+      ucs2_size == 0 ->
+        meta_bin = <<0xFFFF::little-unsigned-16>> <> @null_collation
+        value_bin = <<0::unsigned-64, 0::unsigned-32>>
+        {type, meta_bin, value_bin}
+
+      ucs2_size > plp(:max_short_data_size) ->
+        meta_bin = <<0xFFFF::little-unsigned-16>> <> @null_collation
+        value_bin = encode_plp(ucs2)
+        {type, meta_bin, value_bin}
+
+      true ->
+        meta_bin =
+          <<ucs2_size::little-unsigned-16>> <>
+            @null_collation
+
+        value_bin =
+          <<ucs2_size::little-unsigned-16>> <> ucs2
+
+        {type, meta_bin, value_bin}
+    end
+  end
+
+  # -- param_descriptor ------------------------------------------------
+
+  @impl true
+  def param_descriptor(_value, _metadata), do: "xml"
+
+  # -- infer -----------------------------------------------------------
+
+  @impl true
+  def infer(_value), do: :skip
+
+  # -- private helpers -------------------------------------------------
+
+  defp skip_schema_info(binary) do
+    <<db_len::unsigned-8, _db::binary-size(db_len)-unit(16),
+      owner_len::unsigned-8, _owner::binary-size(owner_len)-unit(16),
+      coll_len::little-unsigned-16, _collection::binary-size(coll_len)-unit(16),
+      rest::binary>> = binary
+
+    rest
+  end
+
+  defp encode_plp(data) do
+    size = byte_size(data)
+
+    <<size::little-unsigned-64>> <>
+      encode_plp_chunks(size, data, <<>>) <>
+      <<0::little-unsigned-32>>
+  end
+
+  defp encode_plp_chunks(0, _data, buf), do: buf
+
+  defp encode_plp_chunks(size, data, buf) do
+    <<_hi::unsigned-32, chunk_size::unsigned-32>> =
+      <<size::unsigned-64>>
+
+    <<chunk::binary-size(chunk_size), rest::binary>> = data
+    plp = <<chunk_size::little-unsigned-32>> <> chunk
+    encode_plp_chunks(size - chunk_size, rest, buf <> plp)
+  end
+end
diff --git a/lib/tds/types.ex b/lib/tds/types.ex
deleted file mode 100644
index 8fa72c6..0000000
--- a/lib/tds/types.ex
+++ /dev/null
@@ -1,1883 +0,0 @@
-defmodule Tds.Types do
-  @moduledoc false
-
-  import Tds.BinaryUtils
-  import Tds.Utils
-
-  alias Tds.Encoding.UCS2
-  alias Tds.Parameter
-
-  @year_1900_days :calendar.date_to_gregorian_days({1900, 1, 1})
-  @secs_in_min 60
-  @secs_in_hour 60 * @secs_in_min
-  @max_time_scale 7
-
-  # Zero Length Data Types
-  @tds_data_type_null 0x1F
-
-  # Fixed Length Data Types
-  # See: https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-tds/859eb3d2-80d3-40f6-a637-414552c9c552
-  @tds_data_type_tinyint 0x30
-  @tds_data_type_bit 0x32
-  @tds_data_type_smallint 0x34
-  @tds_data_type_int 0x38
-  @tds_data_type_smalldatetime 0x3A
-  @tds_data_type_real 0x3B
-  @tds_data_type_money 0x3C
-  @tds_data_type_datetime 0x3D
-  @tds_data_type_float 0x3E
-  @tds_data_type_smallmoney 0x7A
-  @tds_data_type_bigint 0x7F
-
-  # Fixed Data Types with their length
-  @fixed_data_types %{
-    @tds_data_type_null => 0,
-    @tds_data_type_tinyint => 1,
-    @tds_data_type_bit => 1,
-    @tds_data_type_smallint => 2,
-    @tds_data_type_int => 4,
-
@tds_data_type_smalldatetime => 4, - @tds_data_type_real => 4, - @tds_data_type_money => 8, - @tds_data_type_datetime => 8, - @tds_data_type_float => 8, - @tds_data_type_smallmoney => 4, - @tds_data_type_bigint => 8 - } - - # Variable-Length Data Types - # See: https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-tds/ce3183a6-9d89-47e8-a02f-de5a1a1303de - @tds_data_type_uniqueidentifier 0x24 - @tds_data_type_intn 0x26 - # legacy - @tds_data_type_decimal 0x37 - # legacy - @tds_data_type_numeric 0x3F - @tds_data_type_bitn 0x68 - @tds_data_type_decimaln 0x6A - @tds_data_type_numericn 0x6C - @tds_data_type_floatn 0x6D - @tds_data_type_moneyn 0x6E - @tds_data_type_datetimen 0x6F - @tds_data_type_daten 0x28 - @tds_data_type_timen 0x29 - @tds_data_type_datetime2n 0x2A - @tds_data_type_datetimeoffsetn 0x2B - @tds_data_type_char 0x2F - @tds_data_type_varchar 0x27 - @tds_data_type_binary 0x2D - @tds_data_type_varbinary 0x25 - @tds_data_type_bigvarbinary 0xA5 - @tds_data_type_bigvarchar 0xA7 - @tds_data_type_bigbinary 0xAD - @tds_data_type_bigchar 0xAF - @tds_data_type_nvarchar 0xE7 - @tds_data_type_nchar 0xEF - @tds_data_type_xml 0xF1 - @tds_data_type_udt 0xF0 - @tds_data_type_text 0x23 - @tds_data_type_image 0x22 - @tds_data_type_ntext 0x63 - @tds_data_type_variant 0x62 - - @variable_data_types [ - @tds_data_type_uniqueidentifier, - @tds_data_type_intn, - @tds_data_type_decimal, - @tds_data_type_numeric, - @tds_data_type_bitn, - @tds_data_type_decimaln, - @tds_data_type_numericn, - @tds_data_type_floatn, - @tds_data_type_moneyn, - @tds_data_type_datetimen, - @tds_data_type_daten, - @tds_data_type_timen, - @tds_data_type_datetime2n, - @tds_data_type_datetimeoffsetn, - @tds_data_type_char, - @tds_data_type_varchar, - @tds_data_type_binary, - @tds_data_type_varbinary, - @tds_data_type_bigvarbinary, - @tds_data_type_bigvarchar, - @tds_data_type_bigbinary, - @tds_data_type_bigchar, - @tds_data_type_nvarchar, - @tds_data_type_nchar, - @tds_data_type_xml, - 
@tds_data_type_udt, - @tds_data_type_text, - @tds_data_type_image, - @tds_data_type_ntext, - @tds_data_type_variant - ] - - # @tds_plp_marker 0xffff - @tds_plp_null 0xFFFFFFFFFFFFFFFF - # @tds_plp_unknown 0xfffffffffffffffe - - # - # Data Type Decoders - # - - def to_atom(token) do - case token do - @tds_data_type_null -> :null - @tds_data_type_tinyint -> :tinyint - @tds_data_type_bit -> :bit - @tds_data_type_smallint -> :smallint - @tds_data_type_int -> :int - @tds_data_type_smalldatetime -> :smalldatetime - @tds_data_type_real -> :real - @tds_data_type_money -> :money - @tds_data_type_datetime -> :datetime - @tds_data_type_float -> :float - @tds_data_type_smallmoney -> :smallmoney - @tds_data_type_bigint -> :bigint - @tds_data_type_uniqueidentifier -> :uniqueidentifier - @tds_data_type_intn -> :intn - @tds_data_type_decimal -> :decimal - @tds_data_type_numeric -> :numeric - @tds_data_type_bitn -> :bitn - @tds_data_type_decimaln -> :decimaln - @tds_data_type_numericn -> :numericn - @tds_data_type_floatn -> :floatn - @tds_data_type_moneyn -> :moneyn - @tds_data_type_datetimen -> :datetimen - @tds_data_type_daten -> :daten - @tds_data_type_timen -> :timen - @tds_data_type_datetime2n -> :datetime2n - @tds_data_type_datetimeoffsetn -> :datetimeoffsetn - @tds_data_type_char -> :char - @tds_data_type_varchar -> :varchar - @tds_data_type_binary -> :binary - @tds_data_type_varbinary -> :varbinary - @tds_data_type_bigvarbinary -> :bigvarbinary - @tds_data_type_bigvarchar -> :bigvarchar - @tds_data_type_bigbinary -> :bigbinary - @tds_data_type_bigchar -> :bigchar - @tds_data_type_nvarchar -> :nvarchar - @tds_data_type_nchar -> :nchar - @tds_data_type_xml -> :xml - @tds_data_type_udt -> :udt - @tds_data_type_text -> :text - @tds_data_type_image -> :image - @tds_data_type_ntext -> :ntext - @tds_data_type_variant -> :variant - end - end - - def decode_info(<>) - when is_map_key(@fixed_data_types, data_type_code) do - {%{ - data_type: :fixed, - data_type_code: data_type_code, - 
-       length: @fixed_data_types[data_type_code],
-       data_type_name: to_atom(data_type_code)
-     }, tail}
-  end
-
-  def decode_info(<>)
-      when user_type in @variable_data_types do
-    def_type_info = %{
-      data_type: :variable,
-      data_type_code: user_type,
-      sql_type: to_atom(user_type)
-    }
-
-    cond do
-      user_type == @tds_data_type_daten ->
-        length = 3
-
-        type_info =
-          def_type_info
-          |> Map.put(:length, length)
-          |> Map.put(:data_reader, :bytelen)
-
-        {type_info, tail}
-
-      user_type in [
-        @tds_data_type_timen,
-        @tds_data_type_datetime2n,
-        @tds_data_type_datetimeoffsetn
-      ] ->
-        <> = tail
-
-        length =
-          cond do
-            scale in [0, 1, 2] -> 3
-            scale in [3, 4] -> 4
-            scale in [5, 6, 7] -> 5
-            true -> nil
-          end
-
-        length =
-          case user_type do
-            @tds_data_type_datetime2n -> length + 3
-            @tds_data_type_datetimeoffsetn -> length + 5
-            _ -> length
-          end
-
-        type_info =
-          def_type_info
-          |> Map.put(:scale, scale)
-          |> Map.put(:length, length)
-          |> Map.put(:data_reader, :bytelen)
-
-        {type_info, rest}
-
-      user_type in [
-        @tds_data_type_numericn,
-        @tds_data_type_decimaln
-      ] ->
-        <<
-          length::little-unsigned-8,
-          precision::unsigned-8,
-          scale::unsigned-8,
-          rest::binary
-        >> = tail
-
-        type_info =
-          def_type_info
-          |> Map.put(:precision, precision)
-          |> Map.put(:scale, scale)
-          |> Map.put(:length, length)
-          |> Map.put(:data_reader, :bytelen)
-
-        {type_info, rest}
-
-      user_type in [
-        @tds_data_type_uniqueidentifier,
-        @tds_data_type_intn,
-        @tds_data_type_decimal,
-        @tds_data_type_numeric,
-        @tds_data_type_bitn,
-        @tds_data_type_floatn,
-        @tds_data_type_moneyn,
-        @tds_data_type_datetimen,
-        @tds_data_type_binary,
-        @tds_data_type_varbinary
-      ] ->
-        <> = tail
-
-        type_info =
-          def_type_info
-          |> Map.put(:length, length)
-          |> Map.put(:data_reader, :bytelen)
-
-        {type_info, rest}
-
-      user_type in [
-        @tds_data_type_char,
-        @tds_data_type_varchar
-      ] ->
-        <> = tail
-        {:ok, collation} = decode_collation(collation)
-
-        type_info =
-          def_type_info
-          |> Map.put(:length, length)
-          |> Map.put(:data_reader, :bytelen)
-          |> Map.put(:collation, collation)
-
-        {type_info, rest}
-
-      user_type == @tds_data_type_xml ->
-        {_schema_info, rest} = decode_schema_info(tail)
-
-        type_info =
-          def_type_info
-          |> Map.put(:data_reader, :plp)
-
-        {type_info, rest}
-
-      user_type in [
-        @tds_data_type_bigvarchar,
-        @tds_data_type_bigchar,
-        @tds_data_type_nvarchar,
-        @tds_data_type_nchar
-      ] ->
-        <> = tail
-        {:ok, collation} = decode_collation(collation)
-
-        type_info =
-          def_type_info
-          |> Map.put(:collation, collation)
-          |> Map.put(
-            :data_reader,
-            if(length == 0xFFFF, do: :plp, else: :shortlen)
-          )
-          |> Map.put(:length, length)
-
-        {type_info, rest}
-
-      user_type in [
-        @tds_data_type_bigvarbinary,
-        @tds_data_type_bigbinary,
-        @tds_data_type_udt
-      ] ->
-        <> = tail
-
-        type_info =
-          def_type_info
-          |> Map.put(
-            :data_reader,
-            if(length == 0xFFFF, do: :plp, else: :shortlen)
-          )
-          |> Map.put(:length, length)
-
-        {type_info, rest}
-
-      user_type in [@tds_data_type_text, @tds_data_type_ntext] ->
-        <<
-          length::little-unsigned-32,
-          collation::binary-5,
-          numparts::signed-8,
-          rest::binary
-        >> = tail
-
-        {:ok, collation} = decode_collation(collation)
-
-        type_info =
-          def_type_info
-          |> Map.put(:collation, collation)
-          |> Map.put(:data_reader, :longlen)
-          |> Map.put(:length, length)
-
-        rest =
-          Enum.reduce(
-            1..numparts,
-            rest,
-            fn _,
-               <> ->
-              next_rest
-            end
-          )
-
-        {type_info, rest}
-
-      user_type == @tds_data_type_image ->
-        # TODO NumParts Reader
-        <> = tail
-
-        rest =
-          Enum.reduce(
-            1..numparts,
-            rest,
-            fn _,
-               <> ->
-              next
-            end
-          )
-
-        type_info =
-          def_type_info
-          |> Map.put(:length, length)
-          |> Map.put(:data_reader, :longlen)
-
-        {type_info, rest}
-
-      user_type == @tds_data_type_variant ->
-        <> = tail
-
-        type_info =
-          def_type_info
-          |> Map.put(:length, length)
-          |> Map.put(:data_reader, :variant)
-
-        {type_info, rest}
-    end
-  end
-
-  @spec decode_collation(binpart :: <<_::40>>) ::
-          {:ok, Tds.Protocol.Collation.t()}
-          | {:error, :more}
-          | {:error, any}
-  defdelegate decode_collation(binpart),
-    to: Tds.Protocol.Collation,
-    as: :decode
-
-  #
-  # Data Decoders
-  #
-  def decode_data(
-        %{data_type: :fixed, data_type_code: data_type_code, length: length},
-        <>
-      ) do
-    <> = tail
-
-    value =
-      case data_type_code do
-        @tds_data_type_null ->
-          nil
-
-        @tds_data_type_bit ->
-          value_binary != <<0x00>>
-
-        @tds_data_type_smalldatetime ->
-          decode_smalldatetime(value_binary)
-
-        @tds_data_type_smallmoney ->
-          decode_smallmoney(value_binary)
-
-        @tds_data_type_real ->
-          <> = value_binary
-          Float.round(val, 4)
-
-        @tds_data_type_datetime ->
-          decode_datetime(value_binary)
-
-        @tds_data_type_float ->
-          <> = value_binary
-          Float.round(val, 8)
-
-        @tds_data_type_money ->
-          decode_money(value_binary)
-
-        _ ->
-          <> = value_binary
-          val
-      end
-
-    {value, tail}
-  end
-
-  # ByteLength Types
-  def decode_data(%{data_reader: :bytelen}, <<0x00, tail::binary>>),
-    do: {nil, tail}
-
-  def decode_data(
-        %{
-          data_type_code: data_type_code,
-          data_reader: :bytelen,
-          length: length
-        } = data_info,
-        <>
-      ) do
-    value =
-      cond do
-        data_type_code == @tds_data_type_daten ->
-          decode_date(data)
-
-        data_type_code == @tds_data_type_timen ->
-          decode_time(data_info[:scale], data)
-
-        data_type_code == @tds_data_type_datetime2n ->
-          decode_datetime2(data_info[:scale], data)
-
-        data_type_code == @tds_data_type_datetimeoffsetn ->
-          decode_datetimeoffset(data_info[:scale], data)
-
-        data_type_code == @tds_data_type_uniqueidentifier ->
-          decode_uuid(:binary.copy(data))
-
-        data_type_code == @tds_data_type_intn ->
-          case length do
-            1 ->
-              <> = data
-              val
-
-            2 ->
-              <> = data
-              val
-
-            4 ->
-              <> = data
-              val
-
-            8 ->
-              <> = data
-              val
-          end
-
-        data_type_code in [
-          @tds_data_type_decimal,
-          @tds_data_type_numeric,
-          @tds_data_type_decimaln,
-          @tds_data_type_numericn
-        ] ->
-          decode_decimal(data_info[:precision], data_info[:scale], data)
-
-        data_type_code == @tds_data_type_bitn ->
-          data != <<0x00>>
-
-        data_type_code == @tds_data_type_floatn ->
-          len = length * 8
-          <> = data
-          val
-
-        data_type_code == @tds_data_type_moneyn ->
-          case length do
-            4 -> decode_smallmoney(data)
-            8 -> decode_money(data)
-          end
-
-        data_type_code == @tds_data_type_datetimen ->
-          case length do
-            4 -> decode_smalldatetime(data)
-            8 -> decode_datetime(data)
-          end
-
-        data_type_code in [
-          @tds_data_type_char,
-          @tds_data_type_varchar
-        ] ->
-          decode_char(data_info, data)
-
-        data_type_code in [
-          @tds_data_type_binary,
-          @tds_data_type_varbinary
-        ] ->
-          :binary.copy(data)
-      end
-
-    {value, tail}
-  end
-
-  # ShortLength Types
-  def decode_data(%{data_reader: :shortlen}, <<0xFF, 0xFF, tail::binary>>),
-    do: {nil, tail}
-
-  def decode_data(
-        %{data_type_code: data_type_code, data_reader: :shortlen} = data_info,
-        <>
-      ) do
-    value =
-      cond do
-        data_type_code in [
-          @tds_data_type_bigvarchar,
-          @tds_data_type_bigchar
-        ] ->
-          decode_char(data_info, data)
-
-        data_type_code in [
-          @tds_data_type_bigvarbinary,
-          @tds_data_type_bigbinary
-        ] ->
-          :binary.copy(data)
-
-        data_type_code in [
-          @tds_data_type_nvarchar,
-          @tds_data_type_nchar
-        ] ->
-          decode_nchar(data_info, data)
-
-        data_type_code == @tds_data_type_udt ->
-          decode_udt(data_info, :binary.copy(data))
-      end
-
-    {value, tail}
-  end
-
-  def decode_data(%{data_reader: :longlen}, <<0x00, tail::binary>>),
-    do: {nil, tail}
-
-  def decode_data(
-        %{data_type_code: data_type_code, data_reader: :longlen} = data_info,
-        <<
-          text_ptr_size::unsigned-8,
-          _text_ptr::size(text_ptr_size)-unit(8),
-          _timestamp::unsigned-64,
-          size::little-signed-32,
-          data::binary-size(size)-unit(8),
-          tail::binary
-        >>
-      ) do
-    value =
-      case data_type_code do
-        @tds_data_type_text -> decode_char(data_info, data)
-        @tds_data_type_ntext -> decode_nchar(data_info, data)
-        @tds_data_type_image -> :binary.copy(data)
-        _ -> nil
-      end
-
-    {value, tail}
-  end
-
-  # TODO Variant Types
-
-  def decode_data(%{data_reader: :plp}, <<
-        @tds_plp_null::little-unsigned-64,
-        tail::binary
-      >>),
-      do: {nil, tail}
-
-  def decode_data(
-        %{data_type_code: data_type_code, data_reader: :plp} = data_info,
-        <<_size::little-unsigned-64, tail::binary>>
-      ) do
-    {data, tail} = decode_plp_chunk(tail, <<>>)
-
-    value =
-      cond do
-        data_type_code == @tds_data_type_xml ->
-          decode_xml(data_info, data)
-
-        data_type_code in [
-          @tds_data_type_bigvarchar,
-          @tds_data_type_bigchar,
-          @tds_data_type_text
-        ] ->
-          decode_char(data_info, data)
-
-        data_type_code in [
-          @tds_data_type_bigvarbinary,
-          @tds_data_type_bigbinary,
-          @tds_data_type_image
-        ] ->
-          data
-
-        data_type_code in [
-          @tds_data_type_nvarchar,
-          @tds_data_type_nchar,
-          @tds_data_type_ntext
-        ] ->
-          decode_nchar(data_info, data)
-
-        data_type_code == @tds_data_type_udt ->
-          decode_udt(data_info, data)
-      end
-
-    {value, tail}
-  end
-
-  def decode_plp_chunk(<>, buf)
-      when chunksize == 0,
-      do: {buf, tail}
-
-  def decode_plp_chunk(
-        <<
-          chunksize::little-unsigned-32,
-          chunk::binary-size(chunksize)-unit(8),
-          tail::binary
-        >>,
-        buf
-      ) do
-    decode_plp_chunk(tail, buf <> :binary.copy(chunk))
-  end
-
-  def decode_smallmoney(<>) do
-    Float.round(money * 0.0001, 4)
-  end
-
-  def decode_money(<<
-        money_m::little-unsigned-32,
-        money_l::little-unsigned-32
-      >>) do
-    <> = <>
-    Float.round(money * 0.0001, 4)
-  end
-
-  # UUID
-  def decode_uuid(<<_::128>> = bin), do: bin
-
-  def encode_uuid(<<_::64, ?-, _::32, ?-, _::32, ?-, _::32, ?-, _::96>> = string) do
-    raise ArgumentError,
-          "trying to load string UUID as Tds.Types.UUID: #{inspect(string)}. " <>
-            "Maybe you wanted to declare :uuid as your database field?"
-  end
-
-  def encode_uuid(<<_::128>> = bin), do: bin
-
-  def encode_uuid(any),
-    do: raise(ArgumentError, "Invalid uuid value #{inspect(any)}")
-
-  # Decimal
-  def decode_decimal(precision, scale, <>) do
-    set_decimal_precision(precision)
-
-    size = byte_size(value)
-    <> = value
-
-    case sign do
-      0 -> Decimal.new(-1, value, -scale)
-      1 -> Decimal.new(1, value, -scale)
-      _ -> raise ArgumentError, "Sign value out of range. Expected 0 or 1, got #{inspect(sign)}"
-    end
-  end
-
-  def decode_char(data_info, <>) do
-    Tds.Utils.decode_chars(data, data_info.collation.codepage)
-  end
-
-  def decode_nchar(_data_info, <>) do
-    UCS2.to_string(data)
-  end
-
-  def decode_xml(_data_info, <>) do
-    UCS2.to_string(data)
-  end
-
-  def decode_udt(%{}, <>) do
-    # UDT, if used, should be decoded by app that uses it,
-    # tho we could've registered UDT types on connection
-    # Example could be ecto, where custom type is created
-    # special case are built in udt types such as HierarchyId
-    data
-  end
-
-  @doc """
-  Data Type Encoders
-  Encodes the COLMETADATA for the data type
-  """
-  def encode_data_type(%Parameter{type: type} = param) when type != nil do
-    case type do
-      :boolean -> encode_binary_type(param)
-      :binary -> encode_binary_type(param)
-      :string -> encode_string_type(param)
-      :integer -> encode_integer_type(param)
-      :decimal -> encode_decimal_type(param)
-      :numeric -> encode_decimal_type(param)
-      :float -> encode_float_type(param)
-      :smalldatetime -> encode_smalldatetime_type(param)
-      :datetime -> encode_datetime_type(param)
-      :datetime2 -> encode_datetime2_type(param)
-      :datetimeoffset -> encode_datetimeoffset_type(param)
-      :date -> encode_date_type(param)
-      :time -> encode_time_type(param)
-      :uuid -> encode_uuid_type(param)
-      :image -> encode_image_type(param)
-      _ -> encode_string_type(param)
-    end
-  end
-
-  def encode_data_type(param),
-    do: param |> Parameter.fix_data_type() |> encode_data_type()
-
-  def encode_binary_type(%Parameter{value: value} = param)
-      when value == "" do
-    encode_string_type(param)
-  end
-
-  def encode_binary_type(%Parameter{value: value} = param)
-      when is_integer(value) do
-    %{param | value: <>} |> encode_binary_type
-  end
-
-  def encode_binary_type(%Parameter{value: value}) do
-    length = length_for_binary(value)
-    type = @tds_data_type_bigvarbinary
-    data = <> <> length
-    {type, data, []}
-  end
-
-  defp length_for_binary(nil), do: <<0xFF, 0xFF>>
-
-  defp length_for_binary(value) do
-    case byte_size(value) do
-      # varbinary(max)
-      value_size when value_size > 8000 -> <<0xFF, 0xFF>>
-      value_size -> <>
-    end
-  end
-
-  def encode_bit_type(%Parameter{}) do
-    type = @tds_data_type_bigvarbinary
-    data = <>
-    {type, data, []}
-  end
-
-  def encode_uuid_type(%Parameter{value: value}) do
-    length =
-      if value == nil do
-        0x00
-      else
-        0x10
-      end
-
-    type = @tds_data_type_uniqueidentifier
-    data = <>
-    {type, data, []}
-  end
-
-  def encode_image_type(%Parameter{value: value}) do
-    length =
-      if value == nil do
-        0x00
-      else
-        byte_size(value)
-      end
-
-    type = @tds_data_type_image
-    data = <>
-    {type, data, []}
-  end
-
-  def encode_string_type(%Parameter{value: value}) do
-    collation = <<0x00, 0x00, 0x00, 0x00, 0x00>>
-
-    length =
-      if value != nil do
-        value = value |> UCS2.from_string()
-        value_size = byte_size(value)
-
-        if value_size == 0 or value_size > 8000 do
-          <<0xFF, 0xFF>>
-        else
-          <>
-        end
-      else
-        <<0xFF, 0xFF>>
-      end
-
-    type = @tds_data_type_nvarchar
-    data = <> <> length <> collation
-    {type, data, [collation: collation]}
-  end
-
-  # def encode_integer_type(%Parameter{value: value} = param)
-  #     when value < 0 do
-  #   encode_decimal_type(Decima.new(param))
-  # end
-
-  def encode_integer_type(%Parameter{value: value}) do
-    attributes = []
-    type = @tds_data_type_intn
-
-    {attributes, length} =
-      if value == nil do
-        attributes =
-          attributes
-          |> Keyword.put(:length, 4)
-
-        value_size = int_type_size(value)
-        {attributes, <>}
-      else
-        value_size = int_type_size(value)
-        # cond do
-        #   value_size == 1 ->
-        #     data_type_code = @tds_data_type_tinyint
-        #     Enum.find(data_types, fn(x) -> x[:name] == :tinyint end)
-        #   value_size == 2 ->
-        #     data_type_code = @tds_data_type_smallint
-        #     Enum.find(data_types, fn(x) -> x[:name] == :smallint end)
-        #   value_size > 2 and value_size <= 4 ->
-        #     data_type_code = @tds_data_type_int
-        #     Enum.find(data_types, fn(x) -> x[:name] == :int end)
-        #   value_size > 4 and value_size <= 8 ->
-        #     data_type_code = @tds_data_type_bigint
-        #     Enum.find(data_types, fn(x) -> x[:name] == :bigint end)
-        # end
-        attributes =
-          attributes
-          |> Keyword.put(:length, value_size)
-
-        {attributes, <>}
-      end
-
-    data = <> <> length
-    {type, data, attributes}
-  end
-
-  def encode_decimal_type(%Parameter{value: nil} = param) do
-    encode_binary_type(param)
-  end
-
-  def encode_decimal_type(%Parameter{value: value}) do
-    set_decimal_precision(38)
-
-    value_list =
-      value
-      |> Decimal.abs()
-      |> Decimal.to_string(:normal)
-      |> String.split(".")
-
-    {precision, scale} =
-      case value_list do
-        [p, s] ->
-          {String.length(p) + String.length(s), String.length(s)}
-
-        [p] ->
-          {String.length(p), 0}
-      end
-
-    dec_abs =
-      value
-      |> Decimal.abs()
-
-    value =
-      dec_abs.coef
-      |> :binary.encode_unsigned(:little)
-
-    value_size = byte_size(value)
-
-    len =
-      cond do
-        precision <= 9 -> 4
-        precision <= 19 -> 8
-        precision <= 28 -> 12
-        precision <= 38 -> 16
-      end
-
-    padding = len - value_size
-    value_size = value_size + padding + 1
-
-    type = @tds_data_type_decimaln
-    data = <>
-    {type, data, precision: precision, scale: scale}
-  end
-
-  def encode_float_type(%Parameter{value: nil} = param) do
-    encode_decimal_type(param)
-  end
-
-  def encode_float_type(%Parameter{value: value} = param)
-      when is_float(value) do
-    encode_float_type(%{param | value: Decimal.from_float(value)})
-  end
-
-  def encode_float_type(%Parameter{value: %Decimal{} = value}) do
-    set_decimal_precision(38)
-
-    value_list =
-      value
-      |> Decimal.abs()
-      |> Decimal.to_string(:normal)
-      |> String.split(".")
-
-    {precision, scale} =
-      case value_list do
-        [p, s] ->
-          {String.length(p) + String.length(s), String.length(s)}
-
-        [p] ->
-          {String.length(p), 0}
-      end
-
-    dec_abs =
-      value
-      |> Decimal.abs()
-
-    value =
-      dec_abs.coef
-      |> :binary.encode_unsigned(:little)
-
-    value_size = byte_size(value)
-
-    # keep max precision
-    len = 8
-    # cond do
-    #   precision <= 9 -> 4
-    #   precision <= 19 -> 8
-    # end
-
-    padding = len - value_size
-    value_size = value_size + padding
-
-    type = @tds_data_type_floatn
-    data = <>
-    {type, data, precision: precision, scale: scale}
-  end
-
-  @doc """
-  Creates the Parameter Descriptor for the selected type
-  """
-  def encode_param_descriptor(%Parameter{name: name, value: value, type: type} = param)
-      when type != nil do
-    desc =
-      case type do
-        :uuid ->
-          "uniqueidentifier"
-
-        :datetime ->
-          "datetime"
-
-        :datetime2 ->
-          case value do
-            %NaiveDateTime{microsecond: {_, scale}} ->
-              "datetime2(#{scale})"
-
-            _ ->
-              "datetime2"
-          end
-
-        :datetimeoffset ->
-          case value do
-            %DateTime{microsecond: {_, s}} ->
-              "datetimeoffset(#{s})"
-
-            _ ->
-              "datetimeoffset"
-          end
-
-        :date ->
-          "date"
-
-        :time ->
-          case value do
-            %Time{microsecond: {_, scale}} ->
-              "time(#{scale})"
-
-            _ ->
-              "time"
-          end
-
-        :smalldatetime ->
-          "smalldatetime"
-
-        :binary ->
-          encode_binary_descriptor(value)
-
-        :string ->
-          cond do
-            is_nil(value) -> "nvarchar(1)"
-            String.length(value) <= 0 -> "nvarchar(1)"
-            String.length(value) <= 2_000 -> "nvarchar(2000)"
-            true -> "nvarchar(max)"
-          end
-
-        :varchar ->
-          cond do
-            is_nil(value) -> "varchar(1)"
-            String.length(value) <= 0 -> "varchar(1)"
-            String.length(value) <= 2_000 -> "varchar(2000)"
-            true -> "varchar(max)"
-          end
-
-        :integer ->
-          case value do
-            0 ->
-              "int"
-
-            val when val >= 1 ->
-              "bigint"
-
-            _ ->
-              precision =
-                value
-                |> Integer.to_string()
-                |> String.length()
-
-              "decimal(#{precision - 1}, 0)"
-          end
-
-        :bigint ->
-          "bigint"
-
-        :decimal ->
-          encode_decimal_descriptor(param)
-
-        :numeric ->
-          encode_decimal_descriptor(param)
-
-        :float ->
-          encode_float_descriptor(param)
-
-        :boolean ->
-          "bit"
-
-        :image ->
-          "image"
-
-        _ ->
-          # this should fix issues when column is varchar but parameter
-          # is threated as nvarchar(..) since nothing defines parameter
-          # as varchar.
-          latin1 = :unicode.characters_to_list(value || "", :latin1)
-          utf8 = :unicode.characters_to_list(value || "", :utf8)
-
-          db_type =
-            if latin1 == utf8,
-              do: "varchar",
-              else: "nvarchar"
-
-          # this is same .net driver uses in order to avoid too many
-          # cached execution plans, it must be always same length otherwise it will
-          # use too much memory in sql server to cache each plan per param size
-          cond do
-            is_nil(value) -> "#{db_type}(1)"
-            String.length(value) <= 0 -> "#{db_type}(1)"
-            String.length(value) <= 2_000 -> "#{db_type}(2000)"
-            true -> "#{db_type}(max)"
-          end
-      end
-
-    "#{name} #{desc}"
-  end
-
-  # nil
-  def encode_param_descriptor(param),
-    do: param |> Parameter.fix_data_type() |> encode_param_descriptor()
-
-  @doc """
-  Decimal Type Parameter Descriptor
-  """
-  def encode_decimal_descriptor(%Parameter{value: nil}),
-    do: encode_binary_descriptor(nil)
-
-  def encode_decimal_descriptor(%Parameter{value: value} = param)
-      when is_float(value) do
-    encode_decimal_descriptor(%{param | value: Decimal.from_float(value)})
-  end
-
-  def encode_decimal_descriptor(%Parameter{value: value} = param)
-      when is_binary(value) or is_integer(value) do
-    encode_decimal_descriptor(%{param | value: Decimal.new(value)})
-  end
-
-  def encode_decimal_descriptor(%Parameter{value: %Decimal{} = dec}) do
-    set_decimal_precision(38)
-
-    value_list =
-      dec
-      |> Decimal.abs()
-      |> Decimal.to_string(:normal)
-      |> String.split(".")
-
-    {precision, scale} =
-      case value_list do
-        [p, s] ->
-          {String.length(p) + String.length(s), String.length(s)}
-
-        [p] ->
-          {String.length(p), 0}
-      end
-
-    "decimal(#{precision}, #{scale})"
-  end
-
-  # Decimal.new/0 is undefined -- modifying params to hopefully fix
-  def encode_decimal_descriptor(%Parameter{type: :decimal, value: value} = param) do
-    encode_decimal_descriptor(%{param | value: Decimal.new(value)})
-  end
-
-  @doc """
-  Float Type Parameter Descriptor
-  """
-  def encode_float_descriptor(%Parameter{value: nil}), do: "decimal(1,0)"
-
-  def encode_float_descriptor(%Parameter{value: value} = param)
-      when is_float(value) do
-    param
-    |> Map.put(:value, Decimal.from_float(value))
-    |> encode_float_descriptor
-  end
-
-  def encode_float_descriptor(%Parameter{value: %Decimal{}}), do: "float(53)"
-
-  @doc """
-  Binary Type Parameter Descriptor
-  """
-  def encode_binary_descriptor(value) when is_integer(value),
-    do: encode_binary_descriptor(<>)
-
-  def encode_binary_descriptor(value) when is_nil(value), do: "varbinary(1)"
-
-  def encode_binary_descriptor(value) when byte_size(value) <= 0,
-    do: "varbinary(1)"
-
-  def encode_binary_descriptor(value) when byte_size(value) > 0,
-    do: "varbinary(max)"
-
-  # def encode_binary_descriptor(value) when byte_size(value) > 8_000,
-  #   do: "varbinary(max)"
-
-  # def encode_binary_descriptor(value), do: "varbinary(#{byte_size(value)})"
-
-  @doc """
-  Data encoding
-  """
-
-  # binary
-  def encode_data(@tds_data_type_bigvarbinary, value, attr)
-      when is_integer(value),
-      do: encode_data(@tds_data_type_bigvarbinary, <>, attr)
-
-  def encode_data(@tds_data_type_bigvarbinary, nil, _),
-    do: <<@tds_plp_null::little-unsigned-64>>
-
-  def encode_data(@tds_data_type_bigvarbinary, value, _) do
-    case byte_size(value) do
-      # varbinary(max) gets encoded in chunks
-      value_size when value_size > 8000 -> encode_plp(value)
-      value_size -> <> <> value
-    end
-  end
-
-  # image
-  def encode_data(@tds_data_type_image, nil, _attr),
-    do: <<@tds_plp_null::little-unsigned-32>>
-
-  def encode_data(@tds_data_type_image, value, _attr) do
-    image_size = byte_size(value)
-    <> <> value
-  end
-
-  # string
-  def encode_data(@tds_data_type_nvarchar, nil, _),
-    do: <<@tds_plp_null::little-unsigned-64>>
-
-  def encode_data(@tds_data_type_nvarchar, value, _) do
-    value = UCS2.from_string(value)
-    value_size = byte_size(value)
-
-    cond do
-      value_size <= 0 ->
-        <<0x00::unsigned-64, 0x00::unsigned-32>>
-
-      value_size > 8000 ->
-        encode_plp(value)
-
-      true ->
-        <> <> value
-    end
-  end
-
-  # integers
-  def encode_data(_, value, _) when is_integer(value) do
-    size = int_type_size(value)
-    <> <> <>
-  end
-
-  def encode_data(@tds_data_type_intn, value, _) when value == nil do
-    <<0>>
-  end
-
-  def encode_data(@tds_data_type_tinyint, value, _) when value == nil do
-    <<0>>
-  end
-
-  # float
-  def encode_data(@tds_data_type_floatn, nil, _) do
-    <<0>>
-  end
-
-  def encode_data(@tds_data_type_floatn, value, _) do
-    # d_ctx = Decimal.Context.get()
-    # d_ctx = %{d_ctx | precision: 38}
-    # Decimal.Context.set(d_ctx)
-
-    # value_list =
-    #   value
-    #   |> Decimal.new()
-    #   |> Decimal.abs()
-    #   |> Decimal.to_string(:scientific)
-    #   |> String.split(".")
-
-    # precision =
-    #   case value_list do
-    #     [p, s] ->
-    #       String.length(p) + String.length(s)
-
-    #     [p] ->
-    #       String.length(p)
-    #   end
-
-    # if precision <= 7 + 1 do
-    #   <<0x04, value::little-float-32>>
-    # else
-    # up to 15 digits of precision
-    # https://docs.microsoft.com/en-us/sql/t-sql/data-types/float-and-real-transact-sql
-    <<0x08, value::little-float-64>>
-    # end
-  end
-
-  # decimal
-  def encode_data(@tds_data_type_decimaln, %Decimal{} = value, attr) do
-    set_decimal_precision(38)
-    precision = attr[:precision]
-
-    d =
-      value
-      |> Decimal.to_string()
-      |> Decimal.new()
-
-    sign =
-      case d.sign do
-        1 -> 1
-        -1 -> 0
-      end
-
-    value_binary =
-      value
-      |> Decimal.abs()
-      |> Decimal.to_string(:normal)
-      |> String.replace(".", "")
-      |> String.to_integer()
-      |> :binary.encode_unsigned(:little)
-
-    value_size = byte_size(value_binary)
-
-    len =
-      cond do
-        precision <= 9 -> 4
-        precision <= 19 -> 8
-        precision <= 28 -> 12
-        precision <= 38 -> 16
-      end
-
-    padding = len - value_size
-    byte_len = len + 1
-    value_binary = value_binary <> <<0::size(padding)-unit(8)>>
-    <> <> <> <> value_binary
-  end
-
-  def encode_data(@tds_data_type_decimaln, nil, _),
-    # <<0, 0, 0, 0>
-    do: <<0x00::little-unsigned-32>>
-
-  def encode_data(@tds_data_type_decimaln = data_type, value, attr) do
-    encode_data(data_type, Decimal.new(value), attr)
-  end
-
-  # uuid
-  def encode_data(@tds_data_type_uniqueidentifier, value, _) do
-    if value != nil do
-      <<0x10>> <> encode_uuid(value)
-    else
-      <<0x00>>
-    end
-  end
-
-  # datetime
-  def encode_data(@tds_data_type_daten, value, _attr) do
-    data = encode_date(value)
-
-    if data == nil do
-      <<0x00>>
-    else
-      <<0x03, data::binary>>
-    end
-  end
-
-  def encode_data(@tds_data_type_timen, value, _attr) do
-    # Logger.debug"encode_data_timen"
-    {data, scale} = encode_time(value)
-    # Logger.debug "#{inspect data}"
-    if data == nil do
-      <<0x00>>
-    else
-      len =
-        cond do
-          scale < 3 -> 0x03
-          scale < 5 -> 0x04
-          scale < 8 -> 0x05
-        end
-
-      <>
-    end
-  end
-
-  def encode_data(@tds_data_type_datetimen, value, attr) do
-    # Logger.debug "dtn #{inspect attr}"
-    data =
-      case attr[:length] do
-        4 ->
-          encode_smalldatetime(value)
-
-        _ ->
-          encode_datetime(value)
-      end
-
-    if data == nil do
-      <<0x00>>
-    else
-      <> <> data
-    end
-  end
-
-  def encode_data(@tds_data_type_datetime2n, value, _attr) do
-    # Logger.debug "EncodeData #{inspect value}"
-    {data, scale} = encode_datetime2(value)
-
-    if data == nil do
-      <<0x00>>
-    else
-      # 0x08 length of binary for scale 7
-      storage_size =
-        cond do
-          scale < 3 -> 0x06
-          scale < 5 -> 0x07
-          scale < 8 -> 0x08
-        end
-
-      <> <> data
-    end
-  end
-
-  def encode_data(@tds_data_type_datetimeoffsetn, value, _attr) do
-    # Logger.debug "encode_data_datetimeoffsetn #{inspect value}"
-    data = encode_datetimeoffset(value)
-
-    if data == nil do
-      <<0x00>>
-    else
-      case value do
-        %DateTime{microsecond: {_, s}} when s < 3 ->
-          <<0x08, data::binary>>
-
-        %DateTime{microsecond: {_, s}} when s < 5 ->
-          <<0x09, data::binary>>
-
-        _ ->
-          <<0x0A, data::binary>>
-      end
-    end
-  end
-
-  def encode_plp(data) do
-    size = byte_size(data)
-
-    <> <>
-      encode_plp_chunk(size, data, <<>>) <> <<0x00::little-unsigned-32>>
-  end
-
-  def encode_plp_chunk(0, _, buf), do: buf
-
-  def encode_plp_chunk(size, data, buf) do
-    <<_t::unsigned-32, chunk_size::unsigned-32>> = <>
-    <> = data
-    plp = <> <> chunk
-    encode_plp_chunk(size - chunk_size, data, buf <> plp)
-  end
-
-  defp int_type_size(int) when int == nil, do: 4
-  defp int_type_size(int) when int in -254..255, do: 4
-  defp int_type_size(int) when int in -32_768..32_767, do: 4
-  defp int_type_size(int) when int in -2_147_483_648..2_147_483_647, do: 4
-
-  defp int_type_size(int)
-       when int in -9_223_372_036_854_775_808..9_223_372_036_854_775_807,
-       do: 8
-
-  defp int_type_size(int),
-    do:
-      raise(
-        ArgumentError,
-        "Erlang integer value #{int} is too big (more than 64bits) to fit tds" <>
-          " integer/bigint. Please consider using Decimal.new/1 to maintain precision."
-      )
-
-  # Date
-  def decode_date(<>) do
-    date = :calendar.gregorian_days_to_date(days + 366)
-
-    if use_elixir_calendar_types?() do
-      Date.from_erl!(date, Calendar.ISO)
-    else
-      date
-    end
-  end
-
-  def encode_date(nil), do: nil
-
-  def encode_date(%Date{} = date), do: date |> Date.to_erl() |> encode_date()
-
-  def encode_date(date) do
-    days = :calendar.date_to_gregorian_days(date) - 366
-    <>
-  end
-
-  # SmallDateTime
-  def decode_smalldatetime(<<
-        days::little-unsigned-16,
-        mins::little-unsigned-16
-      >>) do
-    date = :calendar.gregorian_days_to_date(@year_1900_days + days)
-    hour = trunc(mins / 60)
-    min = trunc(mins - hour * 60)
-
-    if use_elixir_calendar_types?() do
-      NaiveDateTime.from_erl!({date, {hour, min, 0}})
-    else
-      {date, {hour, min, 0, 0}}
-    end
-  end
-
-  def encode_smalldatetime(nil), do: nil
-
-  def encode_smalldatetime({date, {hour, min, _}}),
-    do: encode_smalldatetime({date, {hour, min, 0, 0}})
-
-  def encode_smalldatetime({date, {hour, min, _, _}}) do
-    days = :calendar.date_to_gregorian_days(date) - @year_1900_days
-    mins = hour * 60 + min
-    encode_smalldatetime(days, mins)
-  end
-
-  def encode_smalldatetime(days, mins) do
-    <>
-  end
-
-  # DateTime
-  def decode_datetime(<<
-        days::little-signed-32,
-        secs300::little-unsigned-32
-      >>) do
-    # Logger.debug "#{inspect {days, secs300}}"
-    date = :calendar.gregorian_days_to_date(@year_1900_days + days)
-
-    milliseconds = round(secs300 * 10 / 3)
-    usec = rem(milliseconds, 1_000)
-
-    seconds = div(milliseconds, 1_000)
-
-    {_, {h, m, s}} = :calendar.seconds_to_daystime(seconds)
-
-    if use_elixir_calendar_types?() do
-      NaiveDateTime.from_erl!(
-        {date, {h, m, s}},
-        {usec * 1_000, 3},
-        Calendar.ISO
-      )
-    else
-      {date, {h, m, s, usec}}
-    end
-  end
-
-  def encode_datetime(nil), do: nil
-
-  def encode_datetime(%DateTime{} = dt),
-    do: encode_datetime(DateTime.to_naive(dt))
-
-  def encode_datetime(%NaiveDateTime{} = dt) do
-    {date, {h, m, s}} = NaiveDateTime.to_erl(dt)
-    {msec, _} = dt.microsecond
-    encode_datetime({date, {h, m, s, msec}})
-  end
-
-  def encode_datetime({date, {h, m, s}}),
-    do: encode_datetime({date, {h, m, s, 0}})
-
-  def encode_datetime({date, {h, m, s, us}}) do
-    days = :calendar.date_to_gregorian_days(date) - @year_1900_days
-    milliseconds = ((h * 60 + m) * 60 + s) * 1_000 + us / 1_000
-
-    secs_300 = round(milliseconds / (10 / 3))
-
-    {days, secs_300} =
-      if secs_300 == 25_920_000 do
-        {days + 1, 0}
-      else
-        {days, secs_300}
-      end
-
-    <>
-  end
-
-  # Time
-  def decode_time(scale, <>) do
-    # this is kind of rendudant, since "size" can be, and is, read from token
-    parsed_fsec =
-      cond do
-        scale in [0, 1, 2] ->
-          <> = fsec
-          parsed_fsec
-
-        scale in [3, 4] ->
-          <> = fsec
-          parsed_fsec
-
-        scale in [5, 6, 7] ->
-          <> = fsec
-          parsed_fsec
-      end
-
-    fs_per_sec = trunc(:math.pow(10, scale))
-
-    hour = trunc(parsed_fsec / fs_per_sec / @secs_in_hour)
-    parsed_fsec = parsed_fsec - hour * @secs_in_hour * fs_per_sec
-
-    min = trunc(parsed_fsec / fs_per_sec / @secs_in_min)
-    parsed_fsec = parsed_fsec - min * @secs_in_min * fs_per_sec
-
-    sec = trunc(parsed_fsec / fs_per_sec)
-
-    parsed_fsec = trunc(parsed_fsec - sec * fs_per_sec)
-
-    if use_elixir_calendar_types?() do
-      {usec, scale} =
-        if scale > 6 do
-          {trunc(parsed_fsec / 10), 6}
-        else
-          {trunc(parsed_fsec * :math.pow(10, 6 - scale)), scale}
-        end
-
-      Time.from_erl!({hour, min, sec}, {usec, scale})
-    else
-      {hour, min, sec, parsed_fsec}
-    end
-  end
-
-  # time(n) is represented as one unsigned integer that represents the number of
-  # 10-n second increments since 12 AM within a day. The length, in bytes, of
-  # that integer depends on the scale n as follows:
-  # 3 bytes if 0 <= n < = 2.
-  # 4 bytes if 3 <= n < = 4.
-  # 5 bytes if 5 <= n < = 7.
-  def encode_time(nil), do: {nil, 0}
-
-  def encode_time({h, m, s}), do: encode_time({h, m, s, 0})
-
-  def encode_time(%Time{} = t) do
-    {h, m, s} = Time.to_erl(t)
-    {_, scale} = t.microsecond
-    # fix ms
-    fsec = microsecond_to_fsec(t.microsecond)
-
-    encode_time({h, m, s, fsec}, scale)
-  end
-
-  def encode_time(time), do: encode_time(time, @max_time_scale)
-
-  def encode_time({h, m, s}, scale), do: encode_time({h, m, s, 0}, scale)
-
-  def encode_time({hour, min, sec, fsec}, scale) do
-    # 10^scale fs in 1 sec
-    fs_per_sec = trunc(:math.pow(10, scale))
-
-    fsec = hour * 3600 * fs_per_sec + min * 60 * fs_per_sec + sec * fs_per_sec + fsec
-
-    bin =
-      cond do
-        scale < 3 ->
-          <>
-
-        scale < 5 ->
-          <>
-
-        :else ->
-          <>
-      end
-
-    {bin, scale}
-  end
-
-  defp microsecond_to_fsec({us, 6}),
-    do: us
-
-  defp microsecond_to_fsec({us, scale}),
-    do: trunc(us / :math.pow(10, 6 - scale))
-
-  # DateTime2
-  def decode_datetime2(scale, <>) do
-    {time, date} =
-      cond do
-        scale in [0, 1, 2] ->
-          <> = data
-          {time, date}
-
-        scale in [3, 4] ->
-          <> = data
-          {time, date}
-
-        scale in [5, 6, 7] ->
-          <> = data
-          {time, date}
-
-        true ->
-          raise "DateTime Scale Unknown"
-      end
-
-    date = decode_date(date)
-    time = decode_time(scale, time)
-
-    with true <- use_elixir_calendar_types?(),
-         {:ok, datetime2} <- NaiveDateTime.new(date, time) do
-      datetime2
-    else
-      false -> {date, time}
-      {:error, error} -> raise DBConnection.EncodeError, error
-    end
-  end
-
-  def encode_datetime2(value, scale \\ @max_time_scale)
-  def encode_datetime2(nil, _), do: {nil, 0}
-
-  def encode_datetime2({date, time}, scale) do
-    {time, scale} = encode_time(time, scale)
-    date = encode_date(date)
-    {time <> date, scale}
-  end
-
-  def encode_datetime2(%NaiveDateTime{} = value, _scale) do
-    t = NaiveDateTime.to_time(value)
-    {time, scale} = encode_time(t)
-    date = encode_date(NaiveDateTime.to_date(value))
-    {time <> date, scale}
-  end
-
-  def encode_datetime2(value, scale) do
-    raise ArgumentError,
-          "value #{inspect(value)} with scale #{inspect(scale)} is not supported DateTime2 value"
-  end
-
-  # DateTimeOffset
-  def decode_datetimeoffset(scale, <>) do
-    {datetime, offset_min} =
-      cond do
-        scale in [0, 1, 2] ->
-          <> = data
-          {datetime, offset_min}
-
-        scale in [3, 4] ->
-          <> = data
-          {datetime, offset_min}
-
-        scale in [5, 6, 7] ->
-          <> = data
-          {datetime, offset_min}
-
-        true ->
-          raise DBConnection.EncodeError, "DateTimeOffset Scale invalid"
-      end
-
-    case decode_datetime2(scale, datetime) do
-      {date, time} ->
-        {date, time, offset_min}
-
-      %NaiveDateTime{} = dt ->
-        offset = offset_min * 60
-
-        str =
-          dt
-          |> NaiveDateTime.add(offset)
-          |> NaiveDateTime.to_iso8601()
-
-        sign = if offset_min >= 0, do: "+", else: "-"
-
-        h = trunc(offset_min / 60)
-
-        m =
-          Integer.to_string(offset_min - h * 60)
-          |> String.pad_leading(2, "0")
-
-        h =
-          abs(h)
-          |> Integer.to_string()
-          |> String.pad_leading(2, "0")
-
-        {:ok, datetime, ^offset} = DateTime.from_iso8601("#{str}#{sign}#{h}:#{m}")
-
-        datetime
-    end
-  end
-
-  def encode_datetimeoffset(datetimetz, scale \\ @max_time_scale)
-  def encode_datetimeoffset(nil, _), do: nil
-
-  def encode_datetimeoffset({date, time, offset_min}, scale) do
-    {datetime, _ignore_always_10bytes} = encode_datetime2({date, time}, scale)
-    datetime <> <>
-  end
-
-  def encode_datetimeoffset(
-        %DateTime{utc_offset: offset} = dt,
-        scale
-      ) do
-    {datetime, _} =
-      dt
-      |> DateTime.add(-offset)
-      |> DateTime.to_naive()
-      |> encode_datetime2(scale)
-
-    offset_min = trunc(offset / 60)
-
-    datetime <> <>
-  end
-
-  def decode_schema_info(<<0x00, tail::binary>>) do
-    {nil, tail}
-  end
-
-  def decode_schema_info(<<0x01, tail::binary>>) do
-    <<
-      dblen::little-unsigned-8,
-      db::binary-size(dblen)-unit(16),
-      prefixlen::little-unsigned-8,
-      prefix::binary-size(prefixlen)-unit(16),
-      schemalen::little-unsigned-16,
-      schema::binary-size(schemalen)-unit(16),
-      rest::binary
-    >> = tail
-
-    schema_info = %{
-      db: UCS2.to_string(db),
-      prefix: UCS2.to_string(prefix),
-      schema: UCS2.to_string(schema)
-    }
-
-    {schema_info, rest}
-  end
-
-  def encode_datetime_type(%Parameter{}) do
-    # Logger.debug "encode_datetime_type"
-    type = @tds_data_type_datetimen
-    data = <>
-    {type, data, length: 8}
-  end
-
-  def encode_smalldatetime_type(%Parameter{}) do
-    # Logger.debug "encode_smalldatetime_type"
-    type = @tds_data_type_datetimen
-    data = <>
-    {type, data, length: 4}
-  end
-
-  def encode_date_type(%Parameter{}) do
-    type = @tds_data_type_daten
-    data = <>
-    {type, data, []}
-  end
-
-  def encode_time_type(%Parameter{value: value}) do
-    # Logger.debug "encode_time_type"
-    type = @tds_data_type_timen
-
-    case value do
-      nil ->
-        {type, <>, scale: 1}
-
-      {_, _, _} ->
-        {type, <>, scale: 1}
-
-      {_, _, _, fsec} ->
-        scale = Integer.digits(fsec) |> length()
-        {type, <>, scale: scale}
-
-      %Time{microsecond: {_, scale}} ->
-        {type, <>, scale: scale}
-
-      other ->
-        raise ArgumentError, "Value #{inspect(other)} is not valid time"
-    end
-  end
-
-  def encode_datetime2_type(%Parameter{
-        value: %NaiveDateTime{microsecond: {_, s}}
-      }) do
-    type = @tds_data_type_datetime2n
-    data = <>
-    {type, data, scale: s}
-  end
-
-  def encode_datetime2_type(%Parameter{}) do
-    # Logger.debug "encode_datetime2_type"
-    type = @tds_data_type_datetime2n
-    data = <>
-    {type, data, scale: 7}
-  end
-
-  def encode_datetimeoffset_type(%Parameter{
-        value: %DateTime{microsecond: {_, s}}
-      }) do
-    type = @tds_data_type_datetimeoffsetn
-    data = <>
-    {type, data, scale: s}
-  end
-
-  def encode_datetimeoffset_type(%Parameter{}) do
-    type = @tds_data_type_datetimeoffsetn
-    data = <>
-    {type, data, scale: 7}
-  end
-
-  defp set_decimal_precision(precision) do
-    Decimal.Context.get()
-    |> Map.put(:precision, precision)
-    |> Decimal.Context.set()
-  end
-end
diff --git a/lib/tds/types/uuid.ex b/lib/tds/types/uuid.ex
index d6cabd2..3db9acc 100644
--- a/lib/tds/types/uuid.ex
+++ b/lib/tds/types/uuid.ex
@@ -1,8 +1,13 @@
 defmodule Tds.Types.UUID do
   @moduledoc """
-  UUID data type
+  UUID data type.
+
+  Deprecated: Use `Ecto.UUID` instead. The `Tds.Type.UUID` handler now
+  performs MSSQL mixed-endian byte reordering at the wire level, so
+  `Ecto.UUID` works directly without this module.
   """
 
+  @deprecated "Use Ecto.UUID instead"
   @doc """
   Casts to UUID.
   """
@@ -19,10 +24,13 @@ defmodule Tds.Types.UUID do
       casted -> {:ok, casted}
     end
 
+  @deprecated "Use Ecto.UUID instead"
   def cast(<>), do: encode(bin)
 
+  @deprecated "Use Ecto.UUID instead"
   def cast(_), do: :error
 
+  @deprecated "Use Ecto.UUID instead"
   @doc """
   Same as `cast/1` but raises `Ecto.CastError` on invalid arguments.
   """
@@ -59,6 +67,7 @@ defmodule Tds.Types.UUID do
   defp c(?f), do: ?f
   defp c(_), do: throw(:error)
 
+  @deprecated "Use Ecto.UUID instead"
   @doc """
   Converts a string representing a UUID into a binary.
   """
@@ -79,8 +88,10 @@ defmodule Tds.Types.UUID do
     end
   end
 
+  @deprecated "Use Ecto.UUID instead"
   def dump(_), do: :error
 
+  @deprecated "Use Ecto.UUID instead"
   def dump!(value) do
     case dump(value) do
       {:ok, binary} -> binary
@@ -114,6 +125,7 @@ defmodule Tds.Types.UUID do
   defp d(?f), do: 15
   defp d(_), do: throw(:error)
 
+  @deprecated "Use Ecto.UUID instead"
   @doc """
   Converts a binary UUID into a string.
   """
@@ -127,8 +139,10 @@ defmodule Tds.Types.UUID do
           "Maybe you wanted to declare :uuid as your database field?"
   end
 
+  @deprecated "Use Ecto.UUID instead"
   def load(_), do: :error
 
+  @deprecated "Use Ecto.UUID instead"
   @doc """
   Generates a version 4 (random) UUID.
   """
@@ -137,6 +151,7 @@ defmodule Tds.Types.UUID do
     uuid
   end
 
+  @deprecated "Use Ecto.UUID instead"
   @doc """
   Generates a version 4 (random) UUID in the binary format.
   """
@@ -150,7 +165,7 @@ defmodule Tds.Types.UUID do
       e7::4, e8::4, e9::4, e10::4, e11::4, e12::4>>
   end
 
-  # Callback invoked by autogenerate fields.
+  @deprecated "Use Ecto.UUID instead"
   @doc false
   def autogenerate, do: generate()
 
diff --git a/mix.exs b/mix.exs
index 1f7fcfe..92b48cd 100644
--- a/mix.exs
+++ b/mix.exs
@@ -2,7 +2,7 @@ defmodule Tds.Mixfile do
   use Mix.Project
 
   @source_url "https://github.com/elixir-ecto/tds"
-  @version "2.3.7"
+  @version "3.0.0"
 
   def project do
     [
@@ -40,7 +40,9 @@ defmodule Tds.Mixfile do
       {:excoding, "~> 0.1", optional: true, only: :test},
       {:tzdata, "~> 1.0", optional: true, only: :test},
       {:table, "~> 0.1.0", optional: true},
-      {:credo, "~> 1.7", only: [:dev, :test], runtime: false}
+      {:credo, "~> 1.7", only: [:dev, :test], runtime: false},
+      {:stream_data, "~> 1.0", only: [:test, :dev], runtime: false},
+      {:benchee, "~> 1.3", only: :dev, runtime: false}
     ]
   end
 
diff --git a/mix.lock b/mix.lock
index 9f81074..e284d3f 100644
--- a/mix.lock
+++ b/mix.lock
@@ -1,10 +1,12 @@
 %{
+  "benchee": {:hex, :benchee, "1.5.0", "4d812c31d54b0ec0167e91278e7de3f596324a78a096fd3d0bea68bb0c513b10", [:mix], [{:deep_merge, "~> 1.0", [hex: :deep_merge, repo: "hexpm", optional: false]}, {:statistex, "~> 1.1", [hex: :statistex, repo: "hexpm", optional: false]}, {:table, "~> 0.1.0", [hex: :table, repo: "hexpm", optional: true]}], "hexpm", "5b075393aea81b8ae74eadd1c28b1d87e8a63696c649d8293db7c4df3eb67535"},
  "bunt": {:hex, :bunt, "1.0.0", "081c2c665f086849e6d57900292b3a161727ab40431219529f13c4ddcf3e7a44", [:mix], [], "hexpm", "dc5f86aa08a5f6fa6b8096f0735c4e76d54ae5c9fa2c143e5a1fc7c1cd9bb6b5"},
  "castore": {:hex, :castore, "1.0.14", "4582dd7d630b48cf5e1ca8d3d42494db51e406b7ba704e81fbd401866366896a", [:mix], [], "hexpm", "7bc1b65249d31701393edaaac18ec8398d8974d52c647b7904d01b964137b9f4"},
  "certifi": {:hex, :certifi, "2.15.0", "0e6e882fcdaaa0a5a9f2b3db55b1394dba07e8d6d9bcad08318fb604c6839712", [:rebar3], [], "hexpm", "b147ed22ce71d72eafdad94f055165c1c182f61a2ff49df28bcc71d1d5b94a60"},
  "credo": {:hex, :credo, "1.7.13", "126a0697df6b7b71cd18c81bc92335297839a806b6f62b61d417500d1070ff4e", [:mix], [{:bunt, "~> 0.2.1 or ~> 1.0", [hex: :bunt, repo: "hexpm", optional: false]}, {:file_system, "~> 0.2 or ~> 1.0", [hex: :file_system, repo: "hexpm", optional: false]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: false]}], "hexpm", "47641e6d2bbff1e241e87695b29f617f1a8f912adea34296fb10ecc3d7e9e84f"},
  "db_connection": {:hex, :db_connection, "2.7.0", "b99faa9291bb09892c7da373bb82cba59aefa9b36300f6145c5f201c7adf48ec", [:mix], [{:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "dcf08f31b2701f857dfc787fbad78223d61a32204f217f15e881dd93e4bdd3ff"},
  "decimal": {:hex, :decimal, "2.3.0", "3ad6255aa77b4a3c4f818171b12d237500e63525c2fd056699967a3e7ea20f62", [:mix], [], "hexpm", "a4d66355cb29cb47c3cf30e71329e58361cfcb37c34235ef3bf1d7bf3773aeac"},
+  "deep_merge": {:hex, :deep_merge, "1.0.0", "b4aa1a0d1acac393bdf38b2291af38cb1d4a52806cf7a4906f718e1feb5ee961", [:mix], [], "hexpm", "ce708e5f094b9cd4e8f2be4f00d2f4250c4095be93f8cd6d018c753894885430"},
  "earmark_parser": {:hex, :earmark_parser, "1.4.44", "f20830dd6b5c77afe2b063777ddbbff09f9759396500cdbe7523efd58d7a339c", [:mix], [], "hexpm", "4778ac752b4701a5599215f7030989c989ffdc4f6df457c5f36938cc2d2a2750"},
  "ex_doc": {:hex, :ex_doc, "0.40.1", "67542e4b6dde74811cfd580e2c0149b78010fd13001fda7cfeb2b2c2ffb1344d", [:mix], [{:earmark_parser, "~> 1.4.44", [hex: :earmark_parser, repo: "hexpm", optional: false]}, {:makeup_c,
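Editor's note: the removed `encode_plp/1` frames `varbinary(max)`/`nvarchar(max)` values as TDS PLP (partially length-prefixed) data — a uint64 total byte length, then chunks each prefixed by a uint32 length, ending with a zero-length terminator chunk. A minimal, hypothetical sketch of that framing (the `PlpSketch` module name and the 8 KB default chunk size are assumptions for illustration, not the library's API):

```elixir
defmodule PlpSketch do
  # Zero-length chunk that terminates a PLP stream.
  @plp_terminator <<0::little-unsigned-32>>

  # Frame `data` as: total size (uint64 LE), then length-prefixed
  # chunks, then the terminator. Accumulates iodata and converts to a
  # binary once, instead of repeated binary concatenation.
  def encode(data, chunk_size \\ 8_000) when is_binary(data) do
    total = byte_size(data)

    [<<total::little-unsigned-64>>, chunks(data, chunk_size), @plp_terminator]
    |> IO.iodata_to_binary()
  end

  defp chunks(<<>>, _chunk_size), do: []

  defp chunks(data, chunk_size) do
    size = min(byte_size(data), chunk_size)
    <<chunk::binary-size(size), rest::binary>> = data
    # Each chunk carries its own uint32 LE length prefix.
    [<<size::little-unsigned-32>>, chunk | chunks(rest, chunk_size)]
  end
end
```

Building the frame as iodata and converting once avoids quadratic binary concatenation, which appears to be the same idea behind the iolist-based PLP path described in the v3.0.0 release notes.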
">= 0.1.0", [hex: :makeup_c, repo: "hexpm", optional: true]}, {:makeup_elixir, "~> 0.14 or ~> 1.0", [hex: :makeup_elixir, repo: "hexpm", optional: false]}, {:makeup_erlang, "~> 0.1 or ~> 1.0", [hex: :makeup_erlang, repo: "hexpm", optional: false]}, {:makeup_html, ">= 0.1.0", [hex: :makeup_html, repo: "hexpm", optional: true]}], "hexpm", "bcef0e2d360d93ac19f01a85d58f91752d930c0a30e2681145feea6bd3516e00"}, "excoding": {:hex, :excoding, "0.1.5", "779aab7fef0dfe57f2b1d41c1820fd66483e2b2a3ccd96805f1656d513910051", [:mix], [{:rustler, ">= 0.0.0", [hex: :rustler, repo: "hexpm", optional: true]}, {:rustler_precompiled, "~> 0.5", [hex: :rustler_precompiled, repo: "hexpm", optional: false]}], "hexpm", "db66ee44caf37528887e380a900dc5e234bd57925fc9d91e8c4b0d1ad1700ae1"}, @@ -21,6 +23,8 @@ "parse_trans": {:hex, :parse_trans, "3.4.1", "6e6aa8167cb44cc8f39441d05193be6e6f4e7c2946cb2759f015f8c56b76e5ff", [:rebar3], [], "hexpm", "620a406ce75dada827b82e453c19cf06776be266f5a67cff34e1ef2cbb60e49a"}, "rustler_precompiled": {:hex, :rustler_precompiled, "0.8.2", "5f25cbe220a8fac3e7ad62e6f950fcdca5a5a5f8501835d2823e8c74bf4268d5", [:mix], [{:castore, "~> 0.1 or ~> 1.0", [hex: :castore, repo: "hexpm", optional: false]}, {:rustler, "~> 0.23", [hex: :rustler, repo: "hexpm", optional: true]}], "hexpm", "63d1bd5f8e23096d1ff851839923162096364bac8656a4a3c00d1fff8e83ee0a"}, "ssl_verify_fun": {:hex, :ssl_verify_fun, "1.1.7", "354c321cf377240c7b8716899e182ce4890c5938111a1296add3ec74cf1715df", [:make, :mix, :rebar3], [], "hexpm", "fe4c190e8f37401d30167c8c405eda19469f34577987c76dde613e838bbc67f8"}, + "statistex": {:hex, :statistex, "1.1.0", "7fec1eb2f580a0d2c1a05ed27396a084ab064a40cfc84246dbfb0c72a5c761e5", [:mix], [], "hexpm", "f5950ea26ad43246ba2cce54324ac394a4e7408fdcf98b8e230f503a0cba9cf5"}, + "stream_data": {:hex, :stream_data, "1.2.0", "58dd3f9e88afe27dc38bef26fce0c84a9e7a96772b2925c7b32cd2435697a52b", [:mix], [], "hexpm", "eb5c546ee3466920314643edf68943a5b14b32d1da9fe01698dc92b73f89a9ed"}, 
"table": {:hex, :table, "0.1.2", "87ad1125f5b70c5dea0307aa633194083eb5182ec537efc94e96af08937e14a8", [:mix], [], "hexpm", "7e99bc7efef806315c7e65640724bf165c3061cdc5d854060f74468367065029"}, "telemetry": {:hex, :telemetry, "1.3.0", "fedebbae410d715cf8e7062c96a1ef32ec22e764197f70cda73d82778d61e7a2", [:rebar3], [], "hexpm", "7015fc8919dbe63764f4b4b87a95b7c0996bd539e0d499be6ec9d7f3875b79e6"}, "tzdata": {:hex, :tzdata, "1.1.3", "b1cef7bb6de1de90d4ddc25d33892b32830f907e7fc2fccd1e7e22778ab7dfbc", [:mix], [{:hackney, "~> 1.17", [hex: :hackney, repo: "hexpm", optional: false]}], "hexpm", "d4ca85575a064d29d4e94253ee95912edfb165938743dbf002acdf0dcecb0c28"}, diff --git a/test/datetime_test.exs b/test/datetime_test.exs index ffd39b6..0b285fc 100644 --- a/test/datetime_test.exs +++ b/test/datetime_test.exs @@ -4,7 +4,6 @@ defmodule DatetimeTest do use ExUnit.Case, async: true alias Tds.Parameter - alias Tds.Types @tag timeout: 50_000 @@ -39,46 +38,52 @@ defmodule DatetimeTest do [] ) - assert nil == Types.encode_datetime(nil) - - assert {@date, {15, 16, 23, 0}} == - @datetime - |> Types.encode_datetime() - |> Types.decode_datetime() - - assert {@date, {15, 16, 23, 123}} == - @datetime_us - |> Types.encode_datetime() - |> Types.decode_datetime() - assert [[nil]] == "SELECT CAST(NULL AS datetime)" |> query([]) - assert [[{{2014, 06, 20}, {10, 21, 42, 0}}]] == + assert [[~N[2014-06-20 10:21:42.000]]] == "SELECT CAST('20140620 10:21:42 AM' AS datetime)" |> query([]) assert [[nil]] == "SELECT @n1" - |> query([%Parameter{name: "@n1", value: nil, type: :datetime}]) + |> query([ + %Parameter{ + name: "@n1", + value: nil, + type: :datetime + } + ]) - assert [[{{2015, 4, 8}, {15, 16, 23, 0}}]] == + assert [[~N[2015-04-08 15:16:23.000]]] == "SELECT @n1" |> query([ - %Parameter{name: "@n1", value: @datetime, type: :datetime} + %Parameter{ + name: "@n1", + value: @datetime, + type: :datetime + } ]) - assert [[{{2015, 4, 8}, {15, 16, 23, 123}}]] == + assert [[~N[2015-04-08 15:16:23.123]]] == 
"SELECT @n1" |> query([ - %Parameter{name: "@n1", value: @datetime_us, type: :datetime} + %Parameter{ + name: "@n1", + value: @datetime_us, + type: :datetime + } ]) assert :ok = "INSERT INTO date_test VALUES (@1, @2)" |> query([ - %Parameter{name: "@1", value: nil, type: :datetime}, + %Parameter{ + name: "@1", + value: nil, + type: :datetime + }, %Parameter{name: "@2", value: 0, type: :integer} ]) @@ -86,34 +91,35 @@ defmodule DatetimeTest do end test "smalldatetime", context do - assert nil == Types.encode_smalldatetime(nil) - - assert {@date, {15, 16, 0, 0}} == - @datetime - |> Types.encode_smalldatetime() - |> Types.decode_smalldatetime() - assert [[nil]] == "SELECT CAST(NULL AS smalldatetime)" |> query([]) - assert [[{{2014, 06, 20}, {10, 40, 0, 0}}]] == + assert [[~N[2014-06-20 10:40:00]]] == "SELECT CAST('20140620 10:40 AM' AS smalldatetime)" |> query([]) assert [[nil]] == "SELECT @n1" |> query([ - %Parameter{name: "@n1", value: nil, type: :smalldatetime} + %Parameter{ + name: "@n1", + value: nil, + type: :smalldatetime + } ]) - assert [[{{2015, 4, 8}, {15, 16, 0, 0}}]] == + assert [[~N[2015-04-08 15:16:00]]] == "SELECT @n1" |> query([ - %Parameter{name: "@n1", value: @datetime, type: :smalldatetime} + %Parameter{ + name: "@n1", + value: @datetime, + type: :smalldatetime + } ]) - assert [[{{2015, 4, 8}, {15, 16, 0, 0}}]] == + assert [[~N[2015-04-08 15:16:00]]] == "SELECT @n1" |> query([ %Parameter{ @@ -125,146 +131,203 @@ defmodule DatetimeTest do end test "date", context do - assert nil == Types.encode_date(nil) - enc = Types.encode_date(@date) - assert @date == Types.decode_date(enc) - assert [[nil]] == query("SELECT CAST(NULL AS date)", []) - assert [[{2014, 06, 20}]] == query("SELECT CAST('20140620' AS date)", []) + + assert [[~D[2014-06-20]]] == + query("SELECT CAST('20140620' AS date)", []) assert [[nil]] == query("SELECT @n1", [ %Parameter{name: "@n1", value: nil, type: :date} ]) - assert [[{2015, 4, 8}]] == + assert [[~D[2015-04-08]]] == query("SELECT 
@n1", [ - %Parameter{name: "@n1", value: @date, type: :date} + %Parameter{ + name: "@n1", + value: @date, + type: :date + } ]) end test "time", context do - assert {nil, 0} == Types.encode_time(nil) - - {bin, scale} = Types.encode_time(@time) - assert {15, 16, 23, 0} == Types.decode_time(scale, bin) - - {bin, scale} = Types.encode_time(@time_fsec, 7) - assert {15, 16, 23, 1_234_567} == Types.decode_time(scale, bin) - - {bin, scale} = Types.encode_time(@time_us, 6) - assert {15, 16, 23, 123_456} == Types.decode_time(scale, bin) - assert [[nil]] == query("SELECT CAST(NULL AS time)", []) assert [[nil]] == query("SELECT CAST(NULL AS time(0))", []) assert [[nil]] == query("SELECT CAST(NULL AS time(6))", []) - assert [[{10, 24, 30, 1_234_567}]] == - query("SELECT CAST('10:24:30.1234567' AS time)", []) - - assert [[{10, 24, 30, 0}]] == - query("SELECT CAST('10:24:30.1234567' AS time(0))", []) - - assert [[{10, 24, 30, 1_234_567}]] == - query("SELECT CAST('10:24:30.1234567' AS time(7))", []) - - assert [[{10, 24, 30, 123_457}]] == - query("SELECT CAST('10:24:30.1234567' AS time(6))", []) - - assert [[{10, 24, 30, 1}]] == - query("SELECT CAST('10:24:30.1234567' AS time(1))", []) + # Scale 7 -> clipped to microsecond (6 digits) + assert [[~T[10:24:30.123456]]] == + query( + "SELECT CAST('10:24:30.1234567' AS time)", + [] + ) + + assert [[~T[10:24:30]]] == + query( + "SELECT CAST('10:24:30.1234567' AS time(0))", + [] + ) + + # Scale 7 -> same clipping + assert [[~T[10:24:30.123456]]] == + query( + "SELECT CAST('10:24:30.1234567' AS time(7))", + [] + ) + + assert [[~T[10:24:30.123457]]] == + query( + "SELECT CAST('10:24:30.1234567' AS time(6))", + [] + ) + + assert [[~T[10:24:30.1]]] == + query( + "SELECT CAST('10:24:30.1234567' AS time(1))", + [] + ) assert [[nil]] == query("SELECT @n1", [ %Parameter{name: "@n1", value: nil, type: :time} ]) - assert [[{15, 16, 23, 0}]] == + # Old encode sends @time as scale 7 -> decode returns scale 6 + assert [[~T[15:16:23.000000]]] == 
query("SELECT @n1", [ - %Parameter{name: "@n1", value: @time, type: :time} + %Parameter{ + name: "@n1", + value: @time, + type: :time + } ]) - assert [[{15, 16, 23, 123}]] == + # {15,16,23,123} at scale 7 -> 123 * 100ns = 12.3us = 12us + assert [[~T[15:16:23.000012]]] == query("SELECT @n1", [ - %Parameter{name: "@n1", value: {15, 16, 23, 123}, type: :time} + %Parameter{ + name: "@n1", + value: {15, 16, 23, 123}, + type: :time + } ]) - assert [[{15, 16, 23, 1_234_567}]] == + # @time_fsec = {15,16,23,1_234_567} at scale 7 + # 1234567 * 100ns = 123456.7us = 123456us + assert [[~T[15:16:23.123456]]] == query("SELECT @n1", [ - %Parameter{name: "@n1", value: @time_fsec, type: :time} + %Parameter{ + name: "@n1", + value: @time_fsec, + type: :time + } ]) end test "datetime2", context do - assert {nil, 0} == Types.encode_datetime2(nil) - - {dt, scale} = Types.encode_datetime2(@datetime) - assert {@date, {15, 16, 23, 0}} == Types.decode_datetime2(scale, dt) - - {dt, scale} = Types.encode_datetime2(@datetime_fsec) - assert @datetime_fsec == Types.decode_datetime2(scale, dt) - - {dt, scale} = Types.encode_datetime2({@date, {131, 56, 23, 0}}, 0) - assert {@date, {131, 56, 23, 0}} == Types.decode_datetime2(scale, dt) - - assert [[nil]] == query("SELECT CAST(NULL AS datetime2)", []) - assert [[nil]] == query("SELECT CAST(NULL AS datetime2(0))", []) - assert [[nil]] == query("SELECT CAST(NULL AS datetime2(6))", []) - - assert [[{{2015, 4, 8}, {15, 16, 23, 0}}]] == - query("SELECT CAST('20150408 15:16:23' AS datetime2)", []) - - assert [[{{2015, 4, 8}, {15, 16, 23, 4_200_000}}]] == - query("SELECT CAST('20150408 15:16:23.42' AS datetime2)", []) - - assert [[{{2015, 4, 8}, {15, 16, 23, 4_200_000}}]] == - query("SELECT CAST('20150408 15:16:23.42' AS datetime2(7))", []) - - assert [[{{2015, 4, 8}, {15, 16, 23, 420_000}}]] == - query("SELECT CAST('20150408 15:16:23.42' AS datetime2(6))", []) + assert [[nil]] == + query("SELECT CAST(NULL AS datetime2)", []) - assert [[{{2015, 4, 8}, 
{15, 16, 23, 0}}]] == - query("SELECT CAST('20150408 15:16:23.42' AS datetime2(0))", []) + assert [[nil]] == + query("SELECT CAST(NULL AS datetime2(0))", []) - assert [[{{2015, 4, 8}, {15, 16, 23, 4}}]] == - query("SELECT CAST('20150408 15:16:23.42' AS datetime2(1))", []) + assert [[nil]] == + query("SELECT CAST(NULL AS datetime2(6))", []) + + # Scale 7 -> microsecond clipping + assert [[~N[2015-04-08 15:16:23.000000]]] == + query( + "SELECT CAST('20150408 15:16:23' AS datetime2)", + [] + ) + + assert [[~N[2015-04-08 15:16:23.420000]]] == + query( + "SELECT CAST('20150408 15:16:23.42' AS datetime2)", + [] + ) + + assert [[~N[2015-04-08 15:16:23.420000]]] == + query( + "SELECT CAST('20150408 15:16:23.42' AS datetime2(7))", + [] + ) + + assert [[~N[2015-04-08 15:16:23.420000]]] == + query( + "SELECT CAST('20150408 15:16:23.42' AS datetime2(6))", + [] + ) + + assert [[~N[2015-04-08 15:16:23]]] == + query( + "SELECT CAST('20150408 15:16:23.42' AS datetime2(0))", + [] + ) + + assert [[~N[2015-04-08 15:16:23.4]]] == + query( + "SELECT CAST('20150408 15:16:23.42' AS datetime2(1))", + [] + ) assert [[nil]] == query("SELECT @n1", [ - %Parameter{name: "@n1", value: nil, type: :datetime2} + %Parameter{ + name: "@n1", + value: nil, + type: :datetime2 + } ]) - assert [[{{2015, 4, 8}, {15, 16, 23, 0}}]] == + assert [[~N[2015-04-08 15:16:23.000000]]] == query("SELECT @n1", [ - %Parameter{name: "@n1", value: @datetime, type: :datetime2} + %Parameter{ + name: "@n1", + value: @datetime, + type: :datetime2 + } ]) - assert [[{{2015, 4, 8}, {15, 16, 23, 1_234_567}}]] == + # Scale 7 -> clipped to 6 + assert [[~N[2015-04-08 15:16:23.123456]]] == query("SELECT @n1", [ - %Parameter{name: "@n1", value: @datetime_fsec, type: :datetime2} + %Parameter{ + name: "@n1", + value: @datetime_fsec, + type: :datetime2 + } ]) end test "implicit params", context do - assert [[{{2015, 4, 8}, {15, 16, 23, 0}}]] == - query("SELECT @n1", [%Parameter{name: "@n1", value: @datetime}]) - - # #datetime_us 
{_,_,_,}, {_,_,_,_} - assert [[{{2015, 4, 8}, {15, 16, 23, 123_456}}]] == + # datetime via old encode -> NaiveDateTime decode + assert [[~N[2015-04-08 15:16:23.000]]] == query("SELECT @n1", [ - %Parameter{name: "@n1", value: @datetime_us} + %Parameter{name: "@n1", value: @datetime} ]) - # datetime_fsec {_,_,_,}, {_,_,_}, _ - assert [[{{2015, 4, 8}, {15, 16, 23, 0}, -240}]] == + # @datetime_us = {{2015,4,8},{15,16,23,123_456}} at scale 7 + # 123456 * 100ns = 12345.6us = 12345us -> ~N[...012345] + assert [[~N[2015-04-08 15:16:23.012345]]] == query("SELECT @n1", [ - %Parameter{name: "@n1", value: @datetimeoffset} + %Parameter{name: "@n1", value: @datetime_us} ]) - # datetime_fsec {_,_,_,}, {_,_,_,_}, _ - assert [[{{2015, 4, 8}, {15, 16, 23, 1_234_567}, -240}]] == - query("SELECT @n1", [ - %Parameter{name: "@n1", value: @datetimeoffset_fsec} - ]) + # datetimeoffset returns DateTime struct + [[result]] = + query("SELECT @n1", [ + %Parameter{name: "@n1", value: @datetimeoffset} + ]) + + assert %DateTime{} = result + + [[result_fsec]] = + query("SELECT @n1", [ + %Parameter{name: "@n1", value: @datetimeoffset_fsec} + ]) + + assert %DateTime{} = result_fsec end end diff --git a/test/datetimeoffset_test.exs b/test/datetimeoffset_test.exs index baeb14c..190b180 100644 --- a/test/datetimeoffset_test.exs +++ b/test/datetimeoffset_test.exs @@ -4,7 +4,6 @@ defmodule DatetimeOffsetTest do use ExUnit.Case, async: true alias Tds.Parameter - alias Tds.Types @tag timeout: 50_000 @@ -22,106 +21,106 @@ defmodule DatetimeOffsetTest do @datetimeoffset_fsec {@date, @time_fsec, @offset} test "datetimeoffset", context do - dts = [ - {{2020, 2, 28}, {13, 59, 59, 0}, 600}, - {{2020, 2, 28}, {13, 59, 59, 0}, 600}, - {{2020, 2, 28}, {13, 59, 59, 1}, 600}, - {{2020, 2, 28}, {13, 59, 59, 12}, 600}, - {{2020, 2, 28}, {13, 59, 59, 123}, 600}, - {{2020, 2, 28}, {13, 59, 59, 1234}, 600}, - {{2020, 2, 28}, {13, 59, 59, 12_345}, 600}, - {{2020, 2, 28}, {13, 59, 59, 123_456}, 600}, - {{2020, 2, 28}, {13, 
59, 59, 1_234_567}, 600}, - {{2020, 2, 28}, {13, 59, 59, 0}, -600}, - {{2020, 2, 28}, {13, 59, 59, 0}, -600}, - {{2020, 2, 28}, {13, 59, 59, 1}, -600}, - {{2020, 2, 28}, {13, 59, 59, 12}, -600}, - {{2020, 2, 28}, {13, 59, 59, 123}, -600}, - {{2020, 2, 28}, {13, 59, 59, 1234}, -600}, - {{2020, 2, 28}, {13, 59, 59, 12_345}, -600}, - {{2020, 2, 28}, {13, 59, 59, 123_456}, -600}, - {{2020, 2, 28}, {13, 59, 59, 1_234_567}, -600}, - {{2020, 2, 28}, {13, 59, 59, 1_234_567}, 0}, - {{2020, 2, 28}, {13, 59, 59, 1_234_567}, 0} - ] - - strs = [ - "'2020-02-28 23:59:59 +10:00'", - "'2020-02-28 23:59:59.0000000 +10:00'", - "'2020-02-28 23:59:59.0000001 +10:00'", - "'2020-02-28 23:59:59.0000012 +10:00'", - "'2020-02-28 23:59:59.0000123 +10:00'", - "'2020-02-28 23:59:59.0001234 +10:00'", - "'2020-02-28 23:59:59.0012345 +10:00'", - "'2020-02-28 23:59:59.0123456 +10:00'", - "'2020-02-28 23:59:59.1234567 +10:00'", - "'2020-02-28 03:59:59 -10:00'", - "'2020-02-28 03:59:59.0000000 -10:00'", - "'2020-02-28 03:59:59.0000001 -10:00'", - "'2020-02-28 03:59:59.0000012 -10:00'", - "'2020-02-28 03:59:59.0000123 -10:00'", - "'2020-02-28 03:59:59.0001234 -10:00'", - "'2020-02-28 03:59:59.0012345 -10:00'", - "'2020-02-28 03:59:59.0123456 -10:00'", - "'2020-02-28 03:59:59.1234567 -10:00'", - "'2020-02-28 13:59:59.1234567 +00:00'", - "'2020-02-28 13:59:59.1234567Z'" - ] - - Enum.zip(dts, strs) - |> Enum.each(fn {dt, str} -> - assert [[^dt]] = query("SELECT CAST(#{str} AS datetimeoffset(7))", []) - end) - - assert nil == Types.encode_datetimeoffset(nil) + assert [[nil]] == + query("SELECT CAST(NULL AS datetimeoffset)", []) - assert [[nil]] == query("SELECT CAST(NULL AS datetimeoffset)", []) - assert [[nil]] == query("SELECT CAST(NULL AS datetimeoffset(0))", []) - assert [[nil]] == query("SELECT CAST(NULL AS datetimeoffset(6))", []) + assert [[nil]] == + query("SELECT CAST(NULL AS datetimeoffset(0))", []) - assert [[{{2015, 4, 8}, {15, 16, 23, 4_200_000}}]] == - query("SELECT CAST('20150408 
15:16:23.42' AS datetime2)", []) + assert [[nil]] == + query("SELECT CAST(NULL AS datetimeoffset(6))", []) - assert [[{{2015, 4, 8}, {15, 16, 23, 4_200_000}, 0}]] == + # New handler returns DateTime structs + # datetime2 returns NaiveDateTime + assert [[~N[2015-04-08 15:16:23.420000]]] == query( - "SELECT CAST('2015-4-8 15:16:23.42 +0:00' as datetimeoffset(7))", + "SELECT CAST('20150408 15:16:23.42' AS datetime2)", [] ) - assert [[{{2015, 4, 8}, {7, 1, 23, 4_200_000}, 495}]] == - query( - "SELECT CAST('2015-4-8 15:16:23.42 +8:15' as datetimeoffset(7))", - [] - ) + # datetimeoffset with +0:00 offset -> decoded as UTC + [[dto_zero]] = + query( + "SELECT CAST('2015-4-8 15:16:23.42 +0:00' as datetimeoffset(7))", + [] + ) - assert [[{{2015, 4, 8}, {23, 31, 23, 4_200_000}, -495}]] == - query( - "SELECT CAST('2015-4-8 15:16:23.42 -8:15' as datetimeoffset(7))", - [] - ) + assert %DateTime{} = dto_zero + assert dto_zero.utc_offset == 0 - assert [[nil]] == - query("SELECT @n1", [ - %Parameter{name: "@n1", value: nil, type: :datetimeoffset} - ]) + # datetimeoffset with +8:15 offset -> decoded as UTC + [[dto_plus]] = + query( + "SELECT CAST('2015-4-8 15:16:23.42 +8:15' as datetimeoffset(7))", + [] + ) - assert [[{{2015, 4, 8}, {15, 16, 23, 1_234_567}, -240}]] == - query("SELECT @n1", [ - %Parameter{ - name: "@n1", - value: @datetimeoffset_fsec, - type: :datetimeoffset - } - ]) + assert %DateTime{} = dto_plus + assert dto_plus.utc_offset == 0 + + # datetimeoffset with -8:15 offset -> decoded as UTC + [[dto_minus]] = + query( + "SELECT CAST('2015-4-8 15:16:23.42 -8:15' as datetimeoffset(7))", + [] + ) - assert [[{{2015, 4, 8}, {15, 16, 23, 0}, -240}]] == + assert %DateTime{} = dto_minus + assert dto_minus.utc_offset == 0 + + assert [[nil]] == query("SELECT @n1", [ %Parameter{ name: "@n1", - value: @datetimeoffset, + value: nil, type: :datetimeoffset } ]) + + [[dto_fsec]] = + query("SELECT @n1", [ + %Parameter{ + name: "@n1", + value: @datetimeoffset_fsec, + type: :datetimeoffset 
+ } + ]) + + assert %DateTime{} = dto_fsec + # Decode returns UTC (offset discarded on decode) + assert dto_fsec.utc_offset == 0 + + [[dto_base]] = + query("SELECT @n1", [ + %Parameter{ + name: "@n1", + value: @datetimeoffset, + type: :datetimeoffset + } + ]) + + assert %DateTime{} = dto_base + assert dto_base.utc_offset == 0 + + # Verify various scales decode to DateTime structs + dts_strs = [ + "'2020-02-28 23:59:59 +10:00'", + "'2020-02-28 23:59:59.0000000 +10:00'", + "'2020-02-28 23:59:59.0000001 +10:00'", + "'2020-02-28 03:59:59 -10:00'", + "'2020-02-28 13:59:59.1234567 +00:00'", + "'2020-02-28 13:59:59.1234567Z'" + ] + + for str <- dts_strs do + [[result]] = + query( + "SELECT CAST(#{str} AS datetimeoffset(7))", + [] + ) + + assert %DateTime{} = result + end end test "database", context do @@ -148,14 +147,46 @@ defmodule DatetimeOffsetTest do assert :ok = "INSERT INTO datetimeoffset_test VALUES (@1, @2, @3, @4, @5, @6, @7, @8, @9)" |> query([ - %Parameter{name: "@1", value: nil, type: :datetimeoffset}, - %Parameter{name: "@2", value: nil, type: :datetimeoffset}, - %Parameter{name: "@3", value: nil, type: :datetimeoffset}, - %Parameter{name: "@4", value: nil, type: :datetimeoffset}, - %Parameter{name: "@5", value: nil, type: :datetimeoffset}, - %Parameter{name: "@6", value: nil, type: :datetimeoffset}, - %Parameter{name: "@7", value: nil, type: :datetimeoffset}, - %Parameter{name: "@8", value: nil, type: :datetimeoffset}, + %Parameter{ + name: "@1", + value: nil, + type: :datetimeoffset + }, + %Parameter{ + name: "@2", + value: nil, + type: :datetimeoffset + }, + %Parameter{ + name: "@3", + value: nil, + type: :datetimeoffset + }, + %Parameter{ + name: "@4", + value: nil, + type: :datetimeoffset + }, + %Parameter{ + name: "@5", + value: nil, + type: :datetimeoffset + }, + %Parameter{ + name: "@6", + value: nil, + type: :datetimeoffset + }, + %Parameter{ + name: "@7", + value: nil, + type: :datetimeoffset + }, + %Parameter{ + name: "@8", + value: nil, + type: 
:datetimeoffset + }, %Parameter{name: "@9", value: 0, type: :integer} ]) @@ -179,21 +210,18 @@ defmodule DatetimeOffsetTest do %Parameter{name: "@9", value: 1, type: :integer} ]) - assert [ - [ - {{2015, 4, 8}, {15, 16, 23, 0}, -240}, - {{2015, 4, 8}, {15, 16, 23, 1}, -240}, - {{2015, 4, 8}, {15, 16, 23, 12}, -240}, - {{2015, 4, 8}, {15, 16, 23, 123}, -240}, - {{2015, 4, 8}, {15, 16, 23, 1235}, -240}, - {{2015, 4, 8}, {15, 16, 23, 12_346}, -240}, - {{2015, 4, 8}, {15, 16, 23, 123_457}, -240}, - {{2015, 4, 8}, {15, 16, 23, 1_234_567}, -240} - ] - ] == - query( - "SELECT zero, one, two, three, four, five, six, seven from datetimeoffset_test WHERE ver = 1" - ) + # All columns return DateTime structs with offset + [[z, o, tw, th, fo, fi, si, se]] = + query( + "SELECT zero, one, two, three, four, five, six, seven " <> + "from datetimeoffset_test WHERE ver = 1" + ) + + for dto <- [z, o, tw, th, fo, fi, si, se] do + assert %DateTime{} = dto + # Decode always returns UTC + assert dto.utc_offset == 0 + end p = %Parameter{ name: "@1", @@ -215,22 +243,16 @@ defmodule DatetimeOffsetTest do %Parameter{name: "@9", value: 2, type: :integer} ]) - # datetimeoffset with higher precision is rounded on insertion - assert [ - [ - {{2015, 4, 8}, {15, 16, 23, 0}, 0}, - {{2015, 4, 8}, {15, 16, 23, 1}, 0}, - {{2015, 4, 8}, {15, 16, 23, 12}, 0}, - {{2015, 4, 8}, {15, 16, 23, 123}, 0}, - {{2015, 4, 8}, {15, 16, 23, 1235}, 0}, - {{2015, 4, 8}, {15, 16, 23, 12_346}, 0}, - {{2015, 4, 8}, {15, 16, 23, 123_456}, 0}, - {{2015, 4, 8}, {15, 16, 23, 1_234_560}, 0} - ] - ] == - query( - "SELECT zero, one, two, three, four, five, six, seven from datetimeoffset_test WHERE ver = 2" - ) + [[z2, o2, tw2, th2, fo2, fi2, si2, se2]] = + query( + "SELECT zero, one, two, three, four, five, six, seven " <> + "from datetimeoffset_test WHERE ver = 2" + ) + + for dto <- [z2, o2, tw2, th2, fo2, fi2, si2, se2] do + assert %DateTime{} = dto + assert dto.utc_offset == 0 + end query("DROP TABLE datetimeoffset_test", []) 
end diff --git a/test/elixir_calendar_test.exs b/test/elixir_calendar_test.exs index 05347ea..a0dc1f0 100644 --- a/test/elixir_calendar_test.exs +++ b/test/elixir_calendar_test.exs @@ -4,7 +4,6 @@ defmodule ElixirCalendarTest do use ExUnit.Case, async: true alias Tds.Parameter, as: P - alias Tds.Types setup do Calendar.put_time_zone_database(Tzdata.TimeZoneDatabase) @@ -40,8 +39,6 @@ defmodule ElixirCalendarTest do ] Enum.each(times, fn t -> - {time, scale} = Types.encode_time(t) - assert t == Types.decode_time(scale, time) assert [[^t]] = query("SELECT @1", [%P{name: "@1", value: t}]) end) end @@ -64,11 +61,9 @@ defmodule ElixirCalendarTest do # AD dates are not supported yet since `:calendar.date_to_georgian_days` do not # support negative years date = ~D[0002-02-28] - assert date == Types.encode_date(date) |> Types.decode_date() assert [[date]] == query("select @1", [%P{name: "@1", value: date}]) date = ~D[2020-02-28] - assert date == Types.encode_date(date) |> Types.decode_date() assert [[date]] == query("select @1", [%P{name: "@1", value: date}]) end @@ -97,9 +92,6 @@ defmodule ElixirCalendarTest do ] Enum.each(datetimes, fn {dt_in, dt_out} -> - token = Types.encode_datetime(dt_in) - assert dt_out == Types.decode_datetime(token) - assert [[^dt_out]] = query("SELECT @1", [ %P{name: "@1", value: dt_in, type: :datetime} @@ -137,8 +129,6 @@ defmodule ElixirCalendarTest do ] Enum.each(datetime2s, fn %{value: dt} = p -> - {token, scale} = Types.encode_datetime2(dt) - assert dt == Types.decode_datetime2(scale, token) assert [[^dt]] = query("SELECT @1", [p]) end) end @@ -182,9 +172,7 @@ defmodule ElixirCalendarTest do %P{name: "@1", value: ~U[2020-02-28 23:59:59.999999Z], type: type} ] - Enum.each(dts, fn %{value: %{microsecond: {_, s}} = dt} = p -> - token = Types.encode_datetimeoffset(dt, s) - assert dt == Types.decode_datetimeoffset(s, token) + Enum.each(dts, fn %{value: dt} = p -> assert [[^dt]] = query("SELECT @1 ", [p]) end) end @@ -233,10 +221,8 @@ defmodule 
ElixirCalendarTest do %P{name: "@1", value: tzb(~U[2020-02-28 13:59:59.999999Z]), type: type} ] - Enum.each(dts, fn %{value: %{microsecond: {_, s}} = dt} = p -> - token = Types.encode_datetimeoffset(dt, s) + Enum.each(dts, fn %{value: dt} = p -> {:ok, utc_dt} = DateTime.shift_zone(dt, "Etc/UTC") - assert utc_dt == Types.decode_datetimeoffset(s, token) assert [[^utc_dt]] = query("SELECT @1 ", [p]) end) end diff --git a/test/login7_test.exs b/test/login7_test.exs index 9dbd44f..45e0a43 100644 --- a/test/login7_test.exs +++ b/test/login7_test.exs @@ -24,19 +24,24 @@ defmodule Login7Test do username: "test" } - assert Login7.encode(login) == - [ - <<16, 1, 0, 228, 0, 0, 1, 0, 220, 0, 0, 0, 4, 0, 0, 116, 0, 16, 0, 0, 4, 0, 0, 7, - 0, 0, 3, 34, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 9, 4, 0, 0, 94, 0, 13, 0, 120, - 0, 4, 0, 128, 0, 8, 0, 144, 0, 10, 0, 164, 0, 13, 0, 0, 0, 0, 0, 190, 0, 4, 0, 0, - 0, 0, 0, 198, 0, 11, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 116, 0, 101, 0, 115, 0, 116, 0, 46, 0, 104, 0, 111, 0, 115, 0, 116, 0, - 46, 0, 99, 0, 111, 0, 109, 0, 116, 0, 101, 0, 115, 0, 116, 0, 162, 165, 179, 165, - 146, 165, 146, 165, 210, 165, 83, 165, 130, 165, 227, 165, 69, 0, 108, 0, 105, 0, - 120, 0, 105, 0, 114, 0, 32, 0, 84, 0, 68, 0, 83, 0, 115, 0, 111, 0, 109, 0, 101, - 0, 46, 0, 104, 0, 111, 0, 115, 0, 116, 0, 46, 0, 99, 0, 111, 0, 109, 0, 79, 0, - 68, 0, 66, 0, 67, 0, 109, 0, 121, 0, 95, 0, 100, 0, 97, 0, 116, 0, 97, 0, 98, 0, - 97, 0, 115, 0, 101, 0>> - ] + expected = + <<16, 1, 0, 228, 0, 0, 1, 0, 220, 0, 0, 0, 4, 0, 0, 116, 0, 16, 0, 0, 4, 0, 0, 7, 0, 0, 3, + 34, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 9, 4, 0, 0, 94, 0, 13, 0, 120, 0, 4, 0, 128, 0, 8, + 0, 144, 0, 10, 0, 164, 0, 13, 0, 0, 0, 0, 0, 190, 0, 4, 0, 0, 0, 0, 0, 198, 0, 11, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 116, 0, 101, 0, 115, 0, + 116, 0, 46, 0, 104, 0, 111, 0, 115, 0, 116, 0, 46, 0, 99, 0, 111, 0, 109, 0, 116, 0, 101, + 0, 115, 0, 116, 
0, 162, 165, 179, 165, 146, 165, 146, 165, 210, 165, 83, 165, 130, 165, + 227, 165, 69, 0, 108, 0, 105, 0, 120, 0, 105, 0, 114, 0, 32, 0, 84, 0, 68, 0, 83, 0, 115, + 0, 111, 0, 109, 0, 101, 0, 46, 0, 104, 0, 111, 0, 115, 0, 116, 0, 46, 0, 99, 0, 111, 0, + 109, 0, 79, 0, 68, 0, 66, 0, 67, 0, 109, 0, 121, 0, 95, 0, 100, 0, 97, 0, 116, 0, 97, 0, + 98, 0, 97, 0, 115, 0, 101, 0>> + + result = + login + |> Login7.encode() + |> Enum.map(&IO.iodata_to_binary/1) + |> IO.iodata_to_binary() + + assert result == expected end end diff --git a/test/messages_test.exs b/test/messages_test.exs deleted file mode 100644 index d583af2..0000000 --- a/test/messages_test.exs +++ /dev/null @@ -1,132 +0,0 @@ -defmodule MessagesTest do - use ExUnit.Case, async: true - - alias Tds.Messages - - describe "encode_packets" do - test "data length < 4088 is encoded into one packet" do - assert [ - << - # type - 0x10, - # status - 0x1, - # length - 0x0, - 0x9, - # channel - 0x0, - 0x0, - # packet number - 0x1, - # window - 0x0, - # data - 0xFF - >> - ] == Messages.encode_packets(0x10, <<0xFF>>) - end - - test "data length == 4087 is encoded into one packet" do - data = :binary.copy(<<0xFF>>, 4087) - - assert [ - << - # type - 0x10, - # status - 0x1, - # length - 0x0F, - 0xFF, - # channel - 0x0, - 0x0, - # packet number - 0x1, - # window - 0x0, - data::binary - >> - ] == Messages.encode_packets(0x10, data) - end - - test "data length == 4088 is encoded into one packet" do - data = :binary.copy(<<0xFF>>, 4088) - - assert [ - [ - << - # type - 0x10, - # status - 0x1, - # length - 0x10, - 0x00, - # channel - 0x0, - 0x0, - # packet number - 0x1, - # window - 0x0 - >>, - << - # data - data::binary - >> - ] - ] == Messages.encode_packets(0x10, data) - end - - test "data length == 4089 is encoded into two packets " do - part1 = :binary.copy(<<0xFF>>, 4088) - part2 = :binary.copy(<<0xFF>>, 1) - data = part1 <> part2 - - assert [ - [ - << - # type - 0x10, - # status - 0x0, - # length - 0x10, - 0x00, - # 
channel - 0x0, - 0x0, - # packet number - 0x1, - # window - 0x0 - >>, - << - # data - part1::binary - >> - ], - << - # type - 0x10, - # status - 0x1, - # length - 0x0, - 0x9, - # channel - 0x0, - 0x0, - # packet number - 0x2, - # window - 0x0, - # data - 0xFF - >> - ] == Messages.encode_packets(0x10, data) - end - end -end diff --git a/test/packet/token_stream_test.exs b/test/packet/token_stream_test.exs index 9229bdd..2e1dd54 100644 --- a/test/packet/token_stream_test.exs +++ b/test/packet/token_stream_test.exs @@ -479,45 +479,43 @@ defmodule Packet.TokenStreamTest do 0x00 >> - @token_stream [ - colmetadata: [ - %{ - collation: %Tds.Protocol.Collation{ - codepage: "WINDOWS-1252", - col_flags: 0, - lcid: 36_941, - sort_id: 52, - version: 0 - }, - data_reader: :shortlen, - data_type: :variable, - data_type_code: 167, - length: 3, - name: "bar", - sql_type: :bigvarchar - } - ], - row: ["foo"], - done: %{ - cmd: 193, - rows: 1, - status: %{ - atnn?: false, - count?: true, - error?: false, - final?: true, - inxact?: false, - more?: false, - rpc_in_batch?: false, - srverror?: false - } - } - ] - @tag capture_log: true test "should decode SqlBatch Server Response" do <<_::binary-8, package_data::binary>> = @package_data - assert @token_stream == Tds.Tokens.decode_tokens(package_data, nil) + tokens = Tds.Tokens.decode_tokens(package_data, nil) + + assert [ + colmetadata: [col_meta], + row: ["foo"], + done: %{ + cmd: 193, + rows: 1, + status: %{ + atnn?: false, + count?: true, + error?: false, + final?: true, + inxact?: false, + more?: false, + rpc_in_batch?: false, + srverror?: false + } + } + ] = tokens + + assert col_meta.name == "bar" + assert col_meta.data_reader == :shortlen + assert col_meta.length == 3 + assert col_meta.handler == Tds.Type.String + assert col_meta.encoding == :single_byte + + assert col_meta.collation == %Tds.Protocol.Collation{ + codepage: "WINDOWS-1252", + col_flags: 0, + lcid: 36_941, + sort_id: 52, + version: 0 + } end @package_data << diff 
--git a/test/protocol/binary_test.exs b/test/protocol/binary_test.exs new file mode 100644 index 0000000..1f1eabb --- /dev/null +++ b/test/protocol/binary_test.exs @@ -0,0 +1,199 @@ +defmodule Tds.Protocol.BinaryTest do + use ExUnit.Case, async: true + + import Tds.Protocol.Binary + + # --------------------------------------------------------------------------- + # Little-endian macros (BinaryUtils baseline) + # --------------------------------------------------------------------------- + + test "byte/0 works in pattern match" do + <<val::byte()>> = <<0xFF>> + assert val == 255 + end + + test "ushort/0 works in pattern match (little-endian unsigned 16-bit)" do + <<val::ushort()>> = <<0x01, 0x00>> + assert val == 1 + end + + test "ulong/0 works in pattern match (little-endian unsigned 32-bit)" do + <<val::ulong()>> = <<0x01, 0x00, 0x00, 0x00>> + assert val == 1 + end + + test "dword/0 is alias for ulong/0" do + <<val::dword()>> = <<0x04, 0x00, 0x00, 0x00>> + assert val == 4 + end + + test "long/0 works (little-endian signed 32-bit)" do + <<val::long()>> = <<0xFF, 0xFF, 0xFF, 0xFF>> + assert val == -1 + end + + test "longlong/0 works (little-endian signed 64-bit)" do + <<val::longlong()>> = <<0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF>> + assert val == -1 + end + + test "ulonglong/0 works (little-endian unsigned 64-bit)" do + <<val::ulonglong()>> = <<0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00>> + assert val == 1 + end + + test "int16/0 works (little-endian signed 16-bit)" do + <<val::int16()>> = <<0xFF, 0xFF>> + assert val == -1 + end + + test "float64/0 works (little-endian 64-bit float)" do + <<val::float64()>> = <<0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xF0, 0x3F>> + assert val == 1.0 + end + + test "float32/0 works (little-endian 32-bit float)" do + <<val::float32()>> = <<0x00, 0x00, 0x80, 0x3F>> + assert val == 1.0 + end + + test "sixbyte/0 works (unsigned 48-bit)" do + <<val::sixbyte()>> = <<0x00, 0x00, 0x00, 0x00, 0x00, 0x01>> + assert val == 1 + end + + test "bytelen/0 works (unsigned 8-bit length)" do + <<len::bytelen()>> = <<0x0A>> + assert len == 10 + end + + test "ushortlen/0 works (little-endian unsigned 16-bit length)" do + <<len::ushortlen()>>
= <<0x0A, 0x00>> + assert len == 10 + end + + test "longlen/0 works (little-endian signed 32-bit length)" do + <<len::longlen()>> = <<0xFF, 0xFF, 0xFF, 0xFF>> + assert len == -1 + end + + test "ulonglonglen/0 works (little-endian unsigned 64-bit length)" do + <<len::ulonglonglen()>> = <<0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00>> + assert len == 1 + end + + test "precision/0 and scale/0 work (unsigned 8-bit)" do + <<p::precision(), s::scale()>> = <<18, 4>> + assert p == 18 + assert s == 4 + end + + test "binary/1 works for sized binary" do + <<data::binary(3)>> = <<1, 2, 3>> + assert data == <<1, 2, 3>> + end + + test "binary/2 works for sized binary with unit" do + <<data::binary(2, 16)>> = <<0, 1, 0, 2>> + assert data == <<0, 1, 0, 2>> + end + + test "unicode/1 works for UCS-2 binary" do + <<data::unicode(2)>> = <<0x41, 0x00, 0x42, 0x00>> + assert data == <<0x41, 0x00, 0x42, 0x00>> + end + + # --------------------------------------------------------------------------- + # Big-endian macros (for prelogin headers) + # --------------------------------------------------------------------------- + + test "ushort(:big) works (big-endian unsigned 16-bit)" do + <<val::ushort(:big)>> = <<0x00, 0x01>> + assert val == 1 + end + + test "ulong(:big) works (big-endian unsigned 32-bit)" do + <<val::ulong(:big)>> = <<0x00, 0x00, 0x00, 0x01>> + assert val == 1 + end + + test "dword(:big) works (big-endian unsigned 32-bit)" do + <<val::dword(:big)>> = <<0x00, 0x00, 0x00, 0x04>> + assert val == 4 + end + + test "long(:big) works (big-endian signed 32-bit)" do + <<val::long(:big)>> = <<0xFF, 0xFF, 0xFF, 0xFF>> + assert val == -1 + end + + test "longlong(:big) works (big-endian signed 64-bit)" do + <<val::longlong(:big)>> = <<0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF>> + assert val == -1 + end + + test "ulonglong(:big) works (big-endian unsigned 64-bit)" do + <<val::ulonglong(:big)>> = <<0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01>> + assert val == 1 + end + + test "explicit :little matches default (no argument)" do + bytes = <<0x01, 0x00>> + <<default::ushort()>> = bytes + <<explicit::ushort(:little)>> = bytes + assert default == explicit + assert default == 1 + end + + test "big-endian and little-endian produce different results for same bytes"
do + bytes = <<0x01, 0x00>> + <<le::ushort()>> = bytes + <<be::ushort(:big)>> = bytes + assert le == 1 + assert be == 256 + end + + # --------------------------------------------------------------------------- + # Parameterized macros (from Grammar, for collation etc.) + # --------------------------------------------------------------------------- + + test "bit/1 works for multi-bit fields" do + # 20-bit + 8-bit + 4-bit = 32 bits = 4 bytes + <<a::bit(20), b::bit(8), c::bit(4)>> = <<0xAB, 0xCD, 0xEF, 0x12>> + assert is_integer(a) + assert is_integer(b) + assert is_integer(c) + assert a + b + c >= 0 + end + + test "byte/1 works for multi-byte fields" do + <<data::byte(5)>> = <<0, 0, 0, 0, 0>> + assert data == 0 + end + + test "uchar/1 works for multi-byte unsigned chars" do + <<val::uchar(2)>> = <<0x01, 0x02>> + assert val == 0x0102 + end + + test "unicodechar/1 works for UCS-2 character sequences" do + # 2 UCS-2 chars = 4 bytes + <<data::unicodechar(2)>> = <<0x00, 0x41, 0x00, 0x42>> + assert is_integer(data) + end + + test "bigbinary/1 works for sized binary" do + <<data::bigbinary(4)>> = <<1, 2, 3, 4>> + assert data == <<1, 2, 3, 4>> + end + + test "charbin_null/1 works for 2-byte null marker" do + <<val::charbin_null(2)>> = <<0xFF, 0xFF>> + assert val == 0xFFFF + end + + test "charbin_null/1 works for 4-byte null marker" do + <<val::charbin_null(4)>> = <<0xFF, 0xFF, 0xFF, 0xFF>> + assert val == 0xFFFFFFFF + end +end diff --git a/test/protocol/constants_test.exs b/test/protocol/constants_test.exs new file mode 100644 index 0000000..3900019 --- /dev/null +++ b/test/protocol/constants_test.exs @@ -0,0 +1,589 @@ +defmodule Tds.Protocol.ConstantsTest do + use ExUnit.Case, async: true + + require Tds.Protocol.Constants + alias Tds.Protocol.Constants + + describe "packet_type/1" do + test "prelogin" do + assert Constants.packet_type(:prelogin) == 0x12 + end + + test "sql_batch" do + assert Constants.packet_type(:sql_batch) == 0x01 + end + + test "rpc" do + assert Constants.packet_type(:rpc) == 0x03 + end + + test "tabular_result" do + assert Constants.packet_type(:tabular_result) == 0x04 + end + + test "attention" do + assert
Constants.packet_type(:attention) == 0x06 + end + + test "transaction_manager" do + assert Constants.packet_type(:transaction_manager) == 0x0E + end + + test "login7" do + assert Constants.packet_type(:login7) == 0x10 + end + + test "bulk" do + assert Constants.packet_type(:bulk) == 0x07 + end + + test "fedauth_token" do + assert Constants.packet_type(:fedauth_token) == 0x08 + end + + test "sspi" do + assert Constants.packet_type(:sspi) == 0x11 + end + + test "usable in binary pattern match" do + packet = <<0x12, 0x01, 0x00, 0x08, 0x00, 0x00, 0x01, 0x00>> + <<type, _::binary>> = packet + assert type == Constants.packet_type(:prelogin) + end + end + + describe "packet_size/1" do + test "header_size" do + assert Constants.packet_size(:header_size) == 8 + end + + test "max_data_size" do + assert Constants.packet_size(:max_data_size) == 4088 + end + + test "max_packet_size" do + assert Constants.packet_size(:max_packet_size) == 4096 + end + end + + describe "tds_type/1 - fixed types" do + test "null type code" do + assert Constants.tds_type(:null) == 0x1F + end + + test "tinyint type code" do + assert Constants.tds_type(:tinyint) == 0x30 + end + + test "bit type code" do + assert Constants.tds_type(:bit) == 0x32 + end + + test "int type code" do + assert Constants.tds_type(:int) == 0x38 + end + + test "bigint type code" do + assert Constants.tds_type(:bigint) == 0x7F + end + + test "datetime type code" do + assert Constants.tds_type(:datetime) == 0x3D + end + + test "float type code" do + assert Constants.tds_type(:float) == 0x3E + end + + test "money type code" do + assert Constants.tds_type(:money) == 0x3C + end + + test "smallmoney type code" do + assert Constants.tds_type(:smallmoney) == 0x7A + end + end + + describe "tds_type/1 - variable types" do + test "uniqueidentifier type code" do + assert Constants.tds_type(:uniqueidentifier) == 0x24 + end + + test "intn type code" do + assert Constants.tds_type(:intn) == 0x26 + end + + test "nvarchar type code" do + assert
Constants.tds_type(:nvarchar) == 0xE7 + end + + test "nchar type code" do + assert Constants.tds_type(:nchar) == 0xEF + end + + test "varchar type code" do + assert Constants.tds_type(:varchar) == 0x27 + end + + test "xml type code" do + assert Constants.tds_type(:xml) == 0xF1 + end + + test "image type code" do + assert Constants.tds_type(:image) == 0x22 + end + + test "text type code" do + assert Constants.tds_type(:text) == 0x23 + end + + test "ntext type code" do + assert Constants.tds_type(:ntext) == 0x63 + end + + test "variant type code" do + assert Constants.tds_type(:variant) == 0x62 + end + + test "daten type code" do + assert Constants.tds_type(:daten) == 0x28 + end + + test "timen type code" do + assert Constants.tds_type(:timen) == 0x29 + end + + test "datetime2n type code" do + assert Constants.tds_type(:datetime2n) == 0x2A + end + + test "datetimeoffsetn type code" do + assert Constants.tds_type(:datetimeoffsetn) == 0x2B + end + + test "bigvarbinary type code" do + assert Constants.tds_type(:bigvarbinary) == 0xA5 + end + + test "bigvarchar type code" do + assert Constants.tds_type(:bigvarchar) == 0xA7 + end + + test "bigbinary type code" do + assert Constants.tds_type(:bigbinary) == 0xAD + end + + test "bigchar type code" do + assert Constants.tds_type(:bigchar) == 0xAF + end + + test "udt type code" do + assert Constants.tds_type(:udt) == 0xF0 + end + + test "json type code" do + assert Constants.tds_type(:json) == 0xF4 + end + + test "vector type code" do + assert Constants.tds_type(:vector) == 0xF5 + end + + test "decimal legacy type code" do + assert Constants.tds_type(:decimal) == 0x37 + end + + test "numeric legacy type code" do + assert Constants.tds_type(:numeric) == 0x3F + end + + test "usable in binary pattern match" do + data = <<0x26, 0x04, 0x01, 0x00, 0x00, 0x00>> + <<type_code, _::binary>> = data + assert type_code == Constants.tds_type(:intn) + end + end + + describe "fixed_data_types/0" do + test "returns a map of type code to byte length" do + types =
Constants.fixed_data_types() + assert is_map(types) + assert Map.get(types, 0x1F) == 0 + assert Map.get(types, 0x30) == 1 + assert Map.get(types, 0x32) == 1 + assert Map.get(types, 0x34) == 2 + assert Map.get(types, 0x38) == 4 + assert Map.get(types, 0x3C) == 8 + assert Map.get(types, 0x7F) == 8 + end + end + + describe "is_fixed_type?/1" do + test "returns true for fixed type codes" do + assert Constants.is_fixed_type?(0x1F) == true + assert Constants.is_fixed_type?(0x30) == true + assert Constants.is_fixed_type?(0x38) == true + assert Constants.is_fixed_type?(0x7F) == true + end + + test "returns false for variable type codes" do + assert Constants.is_fixed_type?(0x26) == false + assert Constants.is_fixed_type?(0xE7) == false + assert Constants.is_fixed_type?(0x24) == false + end + + test "returns false for unknown type codes" do + assert Constants.is_fixed_type?(0x00) == false + assert Constants.is_fixed_type?(0xFF) == false + end + end + + describe "fixed_type_length/1" do + test "returns length for known fixed types" do + assert Constants.fixed_type_length(0x1F) == 0 + assert Constants.fixed_type_length(0x30) == 1 + assert Constants.fixed_type_length(0x34) == 2 + assert Constants.fixed_type_length(0x38) == 4 + assert Constants.fixed_type_length(0x3D) == 8 + end + + test "returns nil for non-fixed types" do + assert Constants.fixed_type_length(0x26) == nil + assert Constants.fixed_type_length(0xE7) == nil + end + end + + describe "token/1" do + test "offset" do + assert Constants.token(:offset) == 0x78 + end + + test "returnstatus" do + assert Constants.token(:returnstatus) == 0x79 + end + + test "colmetadata" do + assert Constants.token(:colmetadata) == 0x81 + end + + test "altmetadata" do + assert Constants.token(:altmetadata) == 0x88 + end + + test "dataclassification" do + assert Constants.token(:dataclassification) == 0xA3 + end + + test "tabname" do + assert Constants.token(:tabname) == 0xA4 + end + + test "colinfo" do + assert Constants.token(:colinfo) 
== 0xA5 + end + + test "order" do + assert Constants.token(:order) == 0xA9 + end + + test "error" do + assert Constants.token(:error) == 0xAA + end + + test "info" do + assert Constants.token(:info) == 0xAB + end + + test "returnvalue" do + assert Constants.token(:returnvalue) == 0xAC + end + + test "loginack" do + assert Constants.token(:loginack) == 0xAD + end + + test "featureextack" do + assert Constants.token(:featureextack) == 0xAE + end + + test "row" do + assert Constants.token(:row) == 0xD1 + end + + test "nbcrow" do + assert Constants.token(:nbcrow) == 0xD2 + end + + test "altrow" do + assert Constants.token(:altrow) == 0xD3 + end + + test "envchange" do + assert Constants.token(:envchange) == 0xE3 + end + + test "sessionstate" do + assert Constants.token(:sessionstate) == 0xE4 + end + + test "sspi" do + assert Constants.token(:sspi) == 0xED + end + + test "fedauthinfo" do + assert Constants.token(:fedauthinfo) == 0xEE + end + + test "done" do + assert Constants.token(:done) == 0xFD + end + + test "doneproc" do + assert Constants.token(:doneproc) == 0xFE + end + + test "doneinproc" do + assert Constants.token(:doneinproc) == 0xFF + end + + test "usable in binary pattern match" do + stream = <<0xFD, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00>> + <<tok, _::binary>> = stream + assert tok == Constants.token(:done) + end + end + + describe "encryption/1" do + test "off" do + assert Constants.encryption(:off) == 0x00 + end + + test "on" do + assert Constants.encryption(:on) == 0x01 + end + + test "not_supported" do + assert Constants.encryption(:not_supported) == 0x02 + end + + test "required" do + assert Constants.encryption(:required) == 0x03 + end + end + + describe "prelogin_token_type/1" do + test "version" do + assert Constants.prelogin_token_type(:version) == 0x00 + end + + test "encryption" do + assert Constants.prelogin_token_type(:encryption) == 0x01 + end + + test "terminator" do + assert Constants.prelogin_token_type(:terminator) == 0xFF + end
+ + test "fed_auth_required" do + assert Constants.prelogin_token_type(:fed_auth_required) == 0x06 + end + + test "nonce_opt" do + assert Constants.prelogin_token_type(:nonce_opt) == 0x07 + end + end + + describe "time_byte_length/1" do + test "scale 0 maps to 3 bytes" do + assert Constants.time_byte_length(0) == 3 + end + + test "scale 1 maps to 3 bytes" do + assert Constants.time_byte_length(1) == 3 + end + + test "scale 2 maps to 3 bytes" do + assert Constants.time_byte_length(2) == 3 + end + + test "scale 3 maps to 4 bytes" do + assert Constants.time_byte_length(3) == 4 + end + + test "scale 4 maps to 4 bytes" do + assert Constants.time_byte_length(4) == 4 + end + + test "scale 5 maps to 5 bytes" do + assert Constants.time_byte_length(5) == 5 + end + + test "scale 6 maps to 5 bytes" do + assert Constants.time_byte_length(6) == 5 + end + + test "scale 7 maps to 5 bytes" do + assert Constants.time_byte_length(7) == 5 + end + end + + describe "plp/1" do + test "plp_null" do + assert Constants.plp(:null) == 0xFFFFFFFFFFFFFFFF + end + + test "plp_unknown_length" do + assert Constants.plp(:unknown_length) == 0xFFFFFFFFFFFFFFFE + end + + test "plp_marker_length" do + assert Constants.plp(:marker_length) == 0xFFFF + end + + test "max_short_data_size" do + assert Constants.plp(:max_short_data_size) == 8000 + end + end + + describe "envchange_type/1" do + test "database" do + assert Constants.envchange_type(:database) == 0x01 + end + + test "packet_size" do + assert Constants.envchange_type(:packet_size) == 0x04 + end + + test "begin_transaction" do + assert Constants.envchange_type(:begin_transaction) == 0x08 + end + + test "commit_transaction" do + assert Constants.envchange_type(:commit_transaction) == 0x09 + end + + test "rollback_transaction" do + assert Constants.envchange_type(:rollback_transaction) == 0x0A + end + + test "routing_info" do + assert Constants.envchange_type(:routing_info) == 0x14 + end + + test "sql_collation" do + assert 
Constants.envchange_type(:sql_collation) == 0x07 + end + + test "transaction_ended" do + assert Constants.envchange_type(:transaction_ended) == 0x11 + end + end + + describe "isolation_level/1" do + test "read_uncommitted" do + assert Constants.isolation_level(:read_uncommitted) == 0x01 + end + + test "read_committed" do + assert Constants.isolation_level(:read_committed) == 0x02 + end + + test "repeatable_read" do + assert Constants.isolation_level(:repeatable_read) == 0x03 + end + + test "snapshot" do + assert Constants.isolation_level(:snapshot) == 0x04 + end + + test "serializable" do + assert Constants.isolation_level(:serializable) == 0x05 + end + end + + describe "tds_version/1" do + test "tds_7_0" do + assert Constants.tds_version(:tds_7_0) == 0x70000000 + end + + test "tds_7_1" do + assert Constants.tds_version(:tds_7_1) == 0x71000001 + end + + test "tds_7_2" do + assert Constants.tds_version(:tds_7_2) == 0x72090002 + end + + test "tds_7_3a" do + assert Constants.tds_version(:tds_7_3a) == 0x730A0003 + end + + test "tds_7_3b" do + assert Constants.tds_version(:tds_7_3b) == 0x730B0003 + end + + test "tds_7_4" do + assert Constants.tds_version(:tds_7_4) == 0x74000004 + end + + test "usable in binary pattern match" do + data = <<0x74, 0x00, 0x00, 0x04>> + <<ver::unsigned-32>> = data + assert ver == Constants.tds_version(:tds_7_4) + end + end + + describe "feature_id/1" do + test "sessionrecovery" do + assert Constants.feature_id(:sessionrecovery) == 0x01 + end + + test "fedauth" do + assert Constants.feature_id(:fedauth) == 0x02 + end + + test "columnencryption" do + assert Constants.feature_id(:columnencryption) == 0x04 + end + + test "globaltransactions" do + assert Constants.feature_id(:globaltransactions) == 0x05 + end + + test "azuresqlsupport" do + assert Constants.feature_id(:azuresqlsupport) == 0x08 + end + + test "dataclassification" do + assert Constants.feature_id(:dataclassification) == 0x09 + end + + test "utf8_support" do + assert
Constants.feature_id(:utf8_support) == 0x0A + end + + test "azuresqldnscaching" do + assert Constants.feature_id(:azuresqldnscaching) == 0x0B + end + + test "jsonsupport" do + assert Constants.feature_id(:jsonsupport) == 0x0D + end + + test "vectorsupport" do + assert Constants.feature_id(:vectorsupport) == 0x0E + end + + test "enhancedroutingsupport" do + assert Constants.feature_id(:enhancedroutingsupport) == 0x0F + end + + test "useragent" do + assert Constants.feature_id(:useragent) == 0x10 + end + + test "terminator" do + assert Constants.feature_id(:terminator) == 0xFF + end + end +end diff --git a/test/protocol/packet_test.exs b/test/protocol/packet_test.exs new file mode 100644 index 0000000..00d9aa9 --- /dev/null +++ b/test/protocol/packet_test.exs @@ -0,0 +1,311 @@ +defmodule Tds.Protocol.PacketTest do + use ExUnit.Case, async: true + use ExUnitProperties + + alias Tds.Protocol.Packet + + defmodule MockSocket do + def start(responses) do + {:ok, agent} = Agent.start_link(fn -> responses end) + agent + end + + def recv(agent, _length) do + Agent.get_and_update(agent, fn + [] -> {{:error, :empty}, []} + [resp | rest] -> {resp, rest} + end) + end + + def stop(agent), do: Agent.stop(agent) + end + + describe "encode/2" do + test "empty payload returns empty list" do + assert [] = Packet.encode(0x01, <<>>) + end + + test "1-byte payload produces single packet with EOM" do + [packet] = Packet.encode(0x01, <<0xAB>>) + bin = IO.iodata_to_binary(packet) + assert byte_size(bin) == 9 + <<0x01, 0x01, 0x00, 0x09, 0x00, 0x00, 0x01, 0x00, 0xAB>> = bin + end + + test "exactly 4088 bytes produces single packet" do + payload = :binary.copy(<<0xAB>>, 4088) + packets = Packet.encode(0x01, payload) + assert length(packets) == 1 + + bin = IO.iodata_to_binary(hd(packets)) + assert byte_size(bin) == 4096 + <<0x01, 0x01, _::binary>> = bin + end + + test "4089 bytes produces two packets" do + payload = :binary.copy(<<0xAB>>, 4089) + packets = Packet.encode(0x01, payload) + assert 
length(packets) == 2 + + [p1, p2] = Enum.map(packets, &IO.iodata_to_binary/1) + <<0x01, 0x00, _::binary>> = p1 + assert byte_size(p1) == 4096 + <<0x01, 0x01, _::binary>> = p2 + assert byte_size(p2) == 9 + end + + test "exact multiple of 4088 bytes" do + payload = :binary.copy(<<0xAB>>, 4088 * 3) + packets = Packet.encode(0x01, payload) + assert length(packets) == 3 + + bins = Enum.map(packets, &IO.iodata_to_binary/1) + <<_, 0x00, _::binary>> = Enum.at(bins, 0) + <<_, 0x00, _::binary>> = Enum.at(bins, 1) + <<_, 0x01, _::binary>> = Enum.at(bins, 2) + end + + test "packet IDs increment starting from 1" do + payload = :binary.copy(<<0xAB>>, 4088 * 3) + packets = Packet.encode(0x01, payload) + + ids = + Enum.map(packets, fn p -> + <<_::binary-6, id::8, _::binary>> = IO.iodata_to_binary(p) + id + end) + + assert ids == [1, 2, 3] + end + + test "packet IDs wrap at 256" do + payload = :binary.copy(<<0xAB>>, 4088 * 256 + 1) + packets = Packet.encode(0x01, payload) + assert length(packets) == 257 + + ids = + Enum.map(packets, fn p -> + <<_::binary-6, id::8, _::binary>> = IO.iodata_to_binary(p) + id + end) + + assert Enum.at(ids, 0) == 1 + assert Enum.at(ids, 254) == 255 + assert Enum.at(ids, 255) == 0 + assert Enum.at(ids, 256) == 1 + end + + test "different packet types encode correctly" do + types = [0x01, 0x03, 0x06, 0x0E, 0x10, 0x12] + + for type <- types do + [packet] = Packet.encode(type, <<1, 2, 3>>) + <<^type, _::binary>> = IO.iodata_to_binary(packet) + end + end + + test "SPID is always zero in encoded packets" do + [packet] = Packet.encode(0x01, <<1, 2, 3>>) + <<_::binary-4, spid::16, _::binary>> = IO.iodata_to_binary(packet) + assert spid == 0 + end + + test "round-trip: stripping headers recovers payload" do + payload = :binary.copy(<<0xAB>>, 4088 * 3 + 500) + packets = Packet.encode(0x01, payload) + + reassembled = + packets + |> Enum.map(fn p -> + <<_::binary-8, data::binary>> = IO.iodata_to_binary(p) + data + end) + |> IO.iodata_to_binary() + + assert 
reassembled == payload + end + + property "encode then strip headers recovers arbitrary payloads" do + check all( + size <- integer(0..50_000), + type <- member_of([0x01, 0x03, 0x06, 0x0E, 0x10, 0x12]), + byte_val <- integer(0..255) + ) do + payload = :binary.copy(<<byte_val>>, size) + packets = Packet.encode(type, payload) + + reassembled = + packets + |> Enum.map(fn p -> + <<_::binary-8, data::binary>> = IO.iodata_to_binary(p) + data + end) + |> IO.iodata_to_binary() + + assert reassembled == payload + end + end + end + + describe "reassemble/2" do + test "single packet response" do + payload = :binary.copy(<<0xAB>>, 100) + [packet] = Packet.encode(0x01, payload) + wire = IO.iodata_to_binary(packet) + + agent = MockSocket.start([{:ok, wire}]) + + assert {:ok, 0x01, ^payload} = + Packet.reassemble({MockSocket, agent}) + + MockSocket.stop(agent) + end + + test "multi-packet response" do + payload = :binary.copy(<<0xCD>>, 4088 * 2 + 500) + packets = Packet.encode(0x01, payload) + + responses = + Enum.map(packets, fn p -> + {:ok, IO.iodata_to_binary(p)} + end) + + agent = MockSocket.start(responses) + + assert {:ok, 0x01, ^payload} = + Packet.reassemble({MockSocket, agent}) + + MockSocket.stop(agent) + end + + test "returns error on recv failure" do + agent = MockSocket.start([{:error, :closed}]) + + assert {:error, {:recv_failed, :closed}} = + Packet.reassemble({MockSocket, agent}) + + MockSocket.stop(agent) + end + + test "returns error when payload exceeds max size" do + payload = :binary.copy(<<0xAB>>, 200) + [packet] = Packet.encode(0x01, payload) + wire = IO.iodata_to_binary(packet) + + agent = MockSocket.start([{:ok, wire}]) + + assert {:error, {:payload_too_large, _, 100}} = + Packet.reassemble( + {MockSocket, agent}, + max_payload_size: 100 + ) + + MockSocket.stop(agent) + end + + test "returns error on out-of-order packet IDs" do + data1 = :binary.copy(<<0xAA>>, 100) + data2 = :binary.copy(<<0xBB>>, 50) + + pkt1 = + <<0x01, 0x00, 108::16-big, 0::16, 1, 0,
data1::binary>> + + pkt2 = + <<0x01, 0x01, 58::16-big, 0::16, 5, 0, data2::binary>> + + agent = MockSocket.start([{:ok, pkt1}, {:ok, pkt2}]) + + assert {:error, {:out_of_order, expected: 2, got: 5}} = + Packet.reassemble({MockSocket, agent}) + + MockSocket.stop(agent) + end + + test "handles partial recv (data split across reads)" do + payload = :binary.copy(<<0xEE>>, 100) + [packet] = Packet.encode(0x01, payload) + wire = IO.iodata_to_binary(packet) + + <<part1::binary-size(50), part2::binary>> = wire + + agent = + MockSocket.start([{:ok, part1}, {:ok, part2}]) + + assert {:ok, 0x01, ^payload} = + Packet.reassemble({MockSocket, agent}) + + MockSocket.stop(agent) + end + + test "handles two packets delivered in single recv" do + payload = :binary.copy(<<0xFF>>, 4088 + 100) + packets = Packet.encode(0x01, payload) + + wire = + packets + |> Enum.map(&IO.iodata_to_binary/1) + |> IO.iodata_to_binary() + + agent = MockSocket.start([{:ok, wire}]) + + assert {:ok, 0x01, ^payload} = + Packet.reassemble({MockSocket, agent}) + + MockSocket.stop(agent) + end + + @tag :slow + test "validates packet ID wrap at 256" do + payload = :binary.copy(<<0xDD>>, 4088 * 256 + 1) + packets = Packet.encode(0x01, payload) + + responses = + Enum.map(packets, fn p -> + {:ok, IO.iodata_to_binary(p)} + end) + + agent = MockSocket.start(responses) + + assert {:ok, 0x01, ^payload} = + Packet.reassemble({MockSocket, agent}) + + MockSocket.stop(agent) + end + end + + describe "decode_header/1" do + test "parses valid 8-byte header" do + data = + <<0x04, 0x01, 0x00, 0x0D, 0x34, 0x00, 0x05, 0x00, "hello">> + + assert {:ok, header, "hello"} = Packet.decode_header(data) + assert header.type == 0x04 + assert header.status == 0x01 + assert header.length == 13 + assert header.spid == 0x0034 + assert header.packet_id == 5 + assert header.window == 0 + end + + test "parses header with no remaining data" do + data = <<0x04, 0x01, 0x00, 0x08, 0x00, 0x00, 0x01, 0x00>> + assert {:ok, header, <<>>} = Packet.decode_header(data) + assert header.type ==
0x04 + assert header.length == 8 + end + + test "returns error for fewer than 8 bytes" do + assert {:error, :incomplete_header} = + Packet.decode_header(<<1, 2, 3>>) + end + + test "returns error for empty binary" do + assert {:error, :incomplete_header} = Packet.decode_header(<<>>) + end + + test "returns error for exactly 7 bytes" do + assert {:error, :incomplete_header} = + Packet.decode_header(<<0, 0, 0, 0, 0, 0, 0>>) + end + end +end diff --git a/test/query_test.exs b/test/query_test.exs index 3544493..604d6b2 100644 --- a/test/query_test.exs +++ b/test/query_test.exs @@ -54,21 +54,33 @@ defmodule QueryTest do [] ) - assert [ - [ - 1, - false, - 12, - 100, - {{2014, 01, 10}, {12, 30, 0, 0}}, - 0.5, - -822_337_203_685_477.5808, - {{2014, 01, 11}, {11, 34, 25, 0}}, - 5.6, - -214_748.3648, - 1000 - ] - ] == query("SELECT TOP(1) * FROM FixedLength", []) + [ + [ + tiny, + bit_val, + small, + int_val, + sdt, + real_val, + money_val, + dt, + float_val, + small_money, + big_int + ] + ] = query("SELECT TOP(1) * FROM FixedLength", []) + + assert tiny == 1 + assert bit_val == false + assert small == 12 + assert int_val == 100 + assert sdt == ~N[2014-01-10 12:30:00] + assert real_val == 0.5 + assert Decimal.equal?(money_val, Decimal.new("-822337203685477.5808")) + assert dt == ~N[2014-01-11 11:34:25.000] + assert float_val == 5.6 + assert Decimal.equal?(small_money, Decimal.new("-214748.3648")) + assert big_int == 1000 query("DROP TABLE FixedLength", []) end @@ -79,7 +91,11 @@ defmodule QueryTest do assert [[1, 1]] = query("SELECT 1, 1", []) assert [[-1]] = query("SELECT -1", []) - assert [[10_000_000_000_000]] = query("select CAST(10000000000000 AS bigint)", []) + assert [[10_000_000_000_000]] = + query( + "select CAST(10000000000000 AS bigint)", + [] + ) assert [["string"]] = query("SELECT 'string'", []) @@ -87,18 +103,28 @@ defmodule QueryTest do assert [["ẽstring"]] = query("SELECT N'ẽstring'", []) Application.delete_env(:tds, :text_encoder) - assert [[true, false]] = 
query("SELECT CAST(1 AS BIT), CAST(0 AS BIT)", []) - uuid = Tds.Types.UUID.bingenerate() - {:ok, uuid_string} = Tds.Types.UUID.load(uuid) - - assert [[^uuid]] = + assert [[true, false]] = query( - """ - SELECT - CAST('#{uuid_string}' AS uniqueidentifier) - """, + "SELECT CAST(1 AS BIT), CAST(0 AS BIT)", [] ) + + # UUID roundtrip: bingenerate creates mixed-endian bytes. + # load() converts to RFC 4122 string. SQL Server parses + # the string and stores mixed-endian. Decode returns + # the wire bytes as-is during transition. + wire_uuid = Tds.Types.UUID.bingenerate() + {:ok, uuid_string} = Tds.Types.UUID.load(wire_uuid) + + [[decoded_uuid]] = + query( + "SELECT CAST('#{uuid_string}' AS uniqueidentifier)", + [] + ) + + assert is_binary(decoded_uuid) + assert byte_size(decoded_uuid) == 16 + assert decoded_uuid == wire_uuid end test "Decode NULL", context do diff --git a/test/rpc_test.exs b/test/rpc_test.exs index 7ab6d15..297c390 100644 --- a/test/rpc_test.exs +++ b/test/rpc_test.exs @@ -68,7 +68,7 @@ defmodule RPCTest do Enum.each(nums, fn num -> assert_raise(ArgumentError, fn -> - Tds.Types.encode_data("@1", num, :integer) + Tds.Type.Integer.encode(num, %{}) end) end) end diff --git a/test/tds/type/binary_test.exs b/test/tds/type/binary_test.exs new file mode 100644 index 0000000..5fdde5f --- /dev/null +++ b/test/tds/type/binary_test.exs @@ -0,0 +1,318 @@ +defmodule Tds.Type.BinaryTest do + use ExUnit.Case, async: true + + alias Tds.Type.Binary, as: BinType + alias Tds.Encoding.UCS2 + + describe "type_codes/0" do + test "returns all 5 binary-related type codes" do + codes = BinType.type_codes() + + # bigbinary + assert 0xAD in codes + # bigvarbinary + assert 0xA5 in codes + # image + assert 0x22 in codes + # legacy binary + assert 0x2D in codes + # legacy varbinary + assert 0x25 in codes + assert length(codes) == 5 + end + end + + describe "type_names/0" do + test "returns :binary and :image" do + assert BinType.type_names() == [:binary, :image] + end + end + + # -- 
decode_metadata ----------------------------------------------- + + describe "decode_metadata/1 for bigbinary (0xAD)" do + test "reads 2-byte LE max_length, shortlen reader" do + tail = <<0xAA, 0xBB>> + input = <<0xAD, 200::little-unsigned-16>> <> tail + + assert {:ok, meta, ^tail} = BinType.decode_metadata(input) + assert meta.data_reader == :shortlen + assert meta.length == 200 + end + + test "PLP marker 0xFFFF sets data_reader to :plp" do + input = <<0xAD, 0xFF, 0xFF, 0xCC>> + + assert {:ok, meta, <<0xCC>>} = BinType.decode_metadata(input) + assert meta.data_reader == :plp + end + end + + describe "decode_metadata/1 for bigvarbinary (0xA5)" do + test "reads 2-byte LE max_length, shortlen reader" do + tail = <<0xDD>> + input = <<0xA5, 4000::little-unsigned-16>> <> tail + + assert {:ok, meta, ^tail} = BinType.decode_metadata(input) + assert meta.data_reader == :shortlen + assert meta.length == 4000 + end + + test "PLP marker 0xFFFF sets data_reader to :plp" do + input = <<0xA5, 0xFF, 0xFF, 0xEE>> + + assert {:ok, meta, <<0xEE>>} = BinType.decode_metadata(input) + assert meta.data_reader == :plp + end + end + + describe "decode_metadata/1 for legacy binary (0x2D)" do + test "reads 1-byte length, bytelen reader" do + tail = <<0x11, 0x22>> + input = <<0x2D, 100>> <> tail + + assert {:ok, meta, ^tail} = BinType.decode_metadata(input) + assert meta.data_reader == :bytelen + assert meta.length == 100 + end + end + + describe "decode_metadata/1 for legacy varbinary (0x25)" do + test "reads 1-byte length, bytelen reader" do + tail = <<0x33>> + input = <<0x25, 50>> <> tail + + assert {:ok, meta, ^tail} = BinType.decode_metadata(input) + assert meta.data_reader == :bytelen + assert meta.length == 50 + end + end + + describe "decode_metadata/1 for image (0x22)" do + test "reads 4-byte length and table name parts" do + table_name = UCS2.from_string("imgs") + table_size = div(byte_size(table_name), 2) + tail = <<0x44>> + + input = + <<0x22, 2_147_483_647::little-unsigned-32, 
1::signed-8, table_size::little-unsigned-16>> <> + table_name <> tail + + assert {:ok, meta, ^tail} = BinType.decode_metadata(input) + assert meta.data_reader == :longlen + assert meta.length == 2_147_483_647 + end + + test "reads multiple table name parts" do + t1 = UCS2.from_string("dbo") + t1_size = div(byte_size(t1), 2) + t2 = UCS2.from_string("tbl") + t2_size = div(byte_size(t2), 2) + tail = <<0x55>> + + input = + <<0x22, 100::little-unsigned-32, 2::signed-8, t1_size::little-unsigned-16>> <> + t1 <> + <<t2_size::little-unsigned-16>> <> + t2 <> tail + + assert {:ok, meta, ^tail} = BinType.decode_metadata(input) + assert meta.data_reader == :longlen + end + end + + # -- decode --------------------------------------------------------- + + describe "decode/2" do + test "nil returns nil" do + assert BinType.decode(nil, %{}) == nil + end + + test "raw binary passthrough, no character conversion" do + data = <<0x00, 0x01, 0xFF, 0xFE, 0x80, 0x7F>> + result = BinType.decode(data, %{}) + assert result == data + end + + test "returns independent copy of the data" do + # Ensure returned binary is not a sub-binary reference + big = :crypto.strong_rand_bytes(100) + <<chunk::binary-size(10), _::binary>> = big + result = BinType.decode(chunk, %{}) + assert result == chunk + assert byte_size(result) == 10 + end + + test "empty binary returns empty binary" do + assert BinType.decode(<<>>, %{}) == <<>> + end + + test "preserves arbitrary bytes including invalid UTF-8" do + data = <<0xC0, 0xC1, 0xF5, 0xFF>> + assert BinType.decode(data, %{}) == data + end + end + + # -- encode --------------------------------------------------------- + + describe "encode/2" do + test "nil produces bigvarbinary PLP null" do + {type_code, meta_bin, value_bin} = BinType.encode(nil, %{}) + + assert type_code == 0xA5 + meta = IO.iodata_to_binary(meta_bin) + assert meta == <<0xA5, 0xFF, 0xFF>> + + value = IO.iodata_to_binary(value_bin) + assert value == <<0xFFFFFFFFFFFFFFFF::little-unsigned-64>> + end + + test "short binary uses shortlen format" do + data = <<1, 2,
3, 4, 5>> + {type_code, meta_bin, value_bin} = BinType.encode(data, %{}) + + assert type_code == 0xA5 + meta = IO.iodata_to_binary(meta_bin) + assert meta == <<0xA5, 5::little-unsigned-16>> + + value = IO.iodata_to_binary(value_bin) + assert value == <<5::little-unsigned-16, 1, 2, 3, 4, 5>> + end + + test "empty binary encodes as PLP empty" do + {type_code, meta_bin, value_bin} = BinType.encode(<<>>, %{}) + + assert type_code == 0xA5 + meta = IO.iodata_to_binary(meta_bin) + assert meta == <<0xA5, 0xFF, 0xFF>> + + value = IO.iodata_to_binary(value_bin) + assert value == <<0::unsigned-64, 0::unsigned-32>> + end + + test "large binary (> 8000 bytes) uses PLP format" do + data = :crypto.strong_rand_bytes(8001) + {type_code, meta_bin, value_bin} = BinType.encode(data, %{}) + + assert type_code == 0xA5 + meta = IO.iodata_to_binary(meta_bin) + assert meta == <<0xA5, 0xFF, 0xFF>> + + value = IO.iodata_to_binary(value_bin) + <<total_size::little-unsigned-64, _chunks::binary>> = value + assert total_size == 8001 + + # Ends with PLP terminator + assert :binary.part(value, byte_size(value), -4) == + <<0::little-unsigned-32>> + end + + test "exactly 8000 bytes uses shortlen" do + data = :crypto.strong_rand_bytes(8000) + {_type_code, meta_bin, _value_bin} = BinType.encode(data, %{}) + + meta = IO.iodata_to_binary(meta_bin) + assert meta == <<0xA5, 8000::little-unsigned-16>> + end + + test "integer value is coerced to single byte" do + {type_code, meta_bin, value_bin} = BinType.encode(42, %{}) + + assert type_code == 0xA5 + meta = IO.iodata_to_binary(meta_bin) + assert meta == <<0xA5, 1::little-unsigned-16>> + + value = IO.iodata_to_binary(value_bin) + assert value == <<1::little-unsigned-16, 42>> + end + end + + # -- param_descriptor ----------------------------------------------- + + describe "param_descriptor/2" do + test "nil returns varbinary(1)" do + assert BinType.param_descriptor(nil, %{}) == "varbinary(1)" + end + + test "empty binary returns varbinary(1)" do + assert BinType.param_descriptor(<<>>, %{}) == 
"varbinary(1)" + end + + test "non-empty binary returns varbinary(max)" do + data = <<1, 2, 3>> + assert BinType.param_descriptor(data, %{}) == "varbinary(max)" + end + + test "large binary returns varbinary(max)" do + data = :crypto.strong_rand_bytes(9000) + assert BinType.param_descriptor(data, %{}) == "varbinary(max)" + end + + test "integer value is coerced" do + assert BinType.param_descriptor(42, %{}) == "varbinary(max)" + end + end + + # -- infer ---------------------------------------------------------- + + describe "infer/1" do + test "invalid UTF-8 binary infers as binary" do + assert {:ok, %{}} = BinType.infer(<<0xC0, 0xC1, 0xF5>>) + end + + test "nil skips" do + assert :skip = BinType.infer(nil) + end + + test "valid UTF-8 string skips (string handler takes those)" do + assert :skip = BinType.infer("hello") + end + + test "empty string skips (string handler takes those)" do + assert :skip = BinType.infer("") + end + + test "integer skips" do + assert :skip = BinType.infer(42) + end + + test "atom skips" do + assert :skip = BinType.infer(:foo) + end + end + + # -- roundtrip ------------------------------------------------------ + + describe "encode/decode roundtrip" do + test "short binary roundtrips" do + original = <<0xDE, 0xAD, 0xBE, 0xEF>> + {_type, _meta, value_bin} = BinType.encode(original, %{}) + value = IO.iodata_to_binary(value_bin) + + # shortlen: 2-byte length prefix + raw data + <<_len::little-unsigned-16, data::binary>> = value + + assert BinType.decode(data, %{}) == original + end + + test "large binary roundtrips through PLP" do + original = :crypto.strong_rand_bytes(10_000) + {_type, _meta, value_bin} = BinType.encode(original, %{}) + value = IO.iodata_to_binary(value_bin) + + # PLP: skip 8-byte total size, then reassemble chunks + <<_total::little-unsigned-64, chunked::binary>> = value + data = reassemble_plp(chunked) + + assert BinType.decode(data, %{}) == original + end + end + + # Helper to reassemble PLP chunks for roundtrip testing + defp 
reassemble_plp(<<0::little-unsigned-32, _rest::binary>>), + do: <<>> + + defp reassemble_plp(<<size::little-unsigned-32, chunk::binary-size(size), rest::binary>>) do + chunk <> reassemble_plp(rest) + end +end diff --git a/test/tds/type/boolean_test.exs b/test/tds/type/boolean_test.exs new file mode 100644 index 0000000..5a09410 --- /dev/null +++ b/test/tds/type/boolean_test.exs @@ -0,0 +1,105 @@ +defmodule Tds.Type.BooleanTest do + use ExUnit.Case, async: true + + alias Tds.Type.Boolean + + describe "type_codes/0" do + test "returns bit and bitn codes" do + assert Boolean.type_codes() == [0x32, 0x68] + end + end + + describe "type_names/0" do + test "returns :boolean" do + assert Boolean.type_names() == [:boolean] + end + end + + describe "decode_metadata/1 for fixed bit (0x32)" do + test "reads no additional bytes" do + input = <<0x32, 0xAA, 0xBB>> + + assert {:ok, %{data_reader: {:fixed, 1}}, <<0xAA, 0xBB>>} = + Boolean.decode_metadata(input) + end + end + + describe "decode_metadata/1 for bitn (0x68)" do + test "reads 1-byte length" do + input = <<0x68, 0x01, 0xCC, 0xDD>> + + assert {:ok, %{data_reader: :bytelen}, <<0xCC, 0xDD>>} = + Boolean.decode_metadata(input) + end + end + + describe "decode/2" do + test "nil returns nil" do + assert Boolean.decode(nil, %{}) == nil + end + + test "<<0x00>> returns false" do + assert Boolean.decode(<<0x00>>, %{}) == false + end + + test "<<0x01>> returns true" do + assert Boolean.decode(<<0x01>>, %{}) == true + end + + test "any non-zero byte returns true" do + assert Boolean.decode(<<0xFF>>, %{}) == true + end + end + + describe "encode/2" do + test "nil produces bitn null encoding" do + {type_code, meta, value} = Boolean.encode(nil, %{}) + + assert type_code == 0x68 + assert IO.iodata_to_binary(meta) == <<0x68, 0x01>> + assert IO.iodata_to_binary(value) == <<0x00>> + end + + test "true produces bitn with 0x01" do + {type_code, meta, value} = Boolean.encode(true, %{}) + + assert type_code == 0x68 + assert IO.iodata_to_binary(meta) == <<0x68, 0x01>> + assert IO.iodata_to_binary(value) == 
<<0x01, 0x01>> + end + + test "false produces bitn with 0x00" do + {type_code, meta, value} = Boolean.encode(false, %{}) + + assert type_code == 0x68 + assert IO.iodata_to_binary(meta) == <<0x68, 0x01>> + assert IO.iodata_to_binary(value) == <<0x01, 0x00>> + end + end + + describe "param_descriptor/2" do + test "returns bit descriptor" do + assert Boolean.param_descriptor(true, %{}) == "bit" + assert Boolean.param_descriptor(false, %{}) == "bit" + assert Boolean.param_descriptor(nil, %{}) == "bit" + end + end + + describe "infer/1" do + test "true infers as boolean" do + assert {:ok, %{}} = Boolean.infer(true) + end + + test "false infers as boolean" do + assert {:ok, %{}} = Boolean.infer(false) + end + + test "integer skips" do + assert :skip = Boolean.infer(42) + end + + test "string skips" do + assert :skip = Boolean.infer("hello") + end + end +end diff --git a/test/tds/type/data_reader_test.exs b/test/tds/type/data_reader_test.exs new file mode 100644 index 0000000..0559b5d --- /dev/null +++ b/test/tds/type/data_reader_test.exs @@ -0,0 +1,162 @@ +defmodule Tds.Type.DataReaderTest do + use ExUnit.Case, async: true + + alias Tds.Type.DataReader + + describe "read/2 :fixed" do + test "reads fixed-length bytes" do + assert {<<1, 2, 3, 4>>, <<0xFF>>} = + DataReader.read({:fixed, 4}, <<1, 2, 3, 4, 0xFF>>) + end + + test "reads 1-byte fixed" do + assert {<<0x2A>>, <<>>} = + DataReader.read({:fixed, 1}, <<0x2A>>) + end + end + + describe "read/2 :bytelen" do + test "null marker 0x00 returns nil" do + assert {nil, <<0xFF>>} = + DataReader.read(:bytelen, <<0x00, 0xFF>>) + end + + test "reads n bytes after length prefix" do + assert {data, <<0xFF>>} = + DataReader.read(:bytelen, <<0x03, 1, 2, 3, 0xFF>>) + + assert data == <<1, 2, 3>> + end + + test "returned data is a copy (not sub-binary)" do + payload = :crypto.strong_rand_bytes(200) + input = <<200>> <> payload <> <<0xFF>> + {data, _rest} = DataReader.read(:bytelen, input) + assert :binary.referenced_byte_size(data) == 
byte_size(data) + end + end + + describe "read/2 :shortlen" do + test "null marker 0xFFFF returns nil" do + assert {nil, <<0xAA>>} = + DataReader.read(:shortlen, <<0xFF, 0xFF, 0xAA>>) + end + + test "reads n bytes after 2-byte LE length" do + assert {data, <<0xBB>>} = + DataReader.read( + :shortlen, + <<0x05, 0x00, 1, 2, 3, 4, 5, 0xBB>> + ) + + assert data == <<1, 2, 3, 4, 5>> + end + + test "returned data is a copy" do + payload = :crypto.strong_rand_bytes(200) + input = <<200, 0x00>> <> payload <> <<0xFF>> + {data, _rest} = DataReader.read(:shortlen, input) + assert :binary.referenced_byte_size(data) == byte_size(data) + end + end + + describe "read/2 :longlen" do + test "null marker 0x00 returns nil" do + assert {nil, <<0xCC>>} = + DataReader.read(:longlen, <<0x00, 0xCC>>) + end + + test "reads past text_ptr and timestamp" do + # text_ptr_size=2, text_ptr=0xAA 0xBB, timestamp=8 bytes, + # data_size=3, data=1 2 3 + input = + <<0x02, 0xAA, 0xBB>> <> + <<0::unsigned-64>> <> + <<0x03, 0x00, 0x00, 0x00>> <> + <<1, 2, 3>> <> + <<0xFF>> + + assert {<<1, 2, 3>>, <<0xFF>>} = + DataReader.read(:longlen, input) + end + end + + describe "read/2 :plp" do + test "null marker returns nil" do + null_marker = <<0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF>> + + assert {nil, <<0xAA>>} = + DataReader.read(:plp, null_marker <> <<0xAA>>) + end + + test "single chunk" do + input = + <<10::little-unsigned-64>> <> + <<5::little-unsigned-32, 1, 2, 3, 4, 5>> <> + <<0::little-unsigned-32>> <> + <<0xFF>> + + assert {<<1, 2, 3, 4, 5>>, <<0xFF>>} = + DataReader.read(:plp, input) + end + + test "multiple chunks reassembled in order" do + input = + <<6::little-unsigned-64>> <> + <<3::little-unsigned-32, 1, 2, 3>> <> + <<3::little-unsigned-32, 4, 5, 6>> <> + <<0::little-unsigned-32>> <> + <<0xBB>> + + assert {<<1, 2, 3, 4, 5, 6>>, <<0xBB>>} = + DataReader.read(:plp, input) + end + + test "empty PLP (zero-length total, immediate terminator)" do + input = + <<0::little-unsigned-64>> <> + 
<<0::little-unsigned-32>> <> + <<0xCC>> + + assert {<<>>, <<0xCC>>} = + DataReader.read(:plp, input) + end + + test "PLP result is independent binary (not sub-binary)" do + chunk = :crypto.strong_rand_bytes(200) + + input = + <<200::little-unsigned-64>> <> + <<200::little-unsigned-32>> <> + chunk <> + <<0::little-unsigned-32>> + + {data, _rest} = DataReader.read(:plp, input) + assert :binary.referenced_byte_size(data) == byte_size(data) + end + end + + describe "read/2 :variant" do + test "zero length returns nil" do + assert {nil, <<0xAA>>} = + DataReader.read(:variant, <<0::little-unsigned-32, 0xAA>>) + end + + test "reads n bytes after 4-byte LE length" do + assert {data, <<0xBB>>} = + DataReader.read( + :variant, + <<5::little-unsigned-32, 1, 2, 3, 4, 5, 0xBB>> + ) + + assert data == <<1, 2, 3, 4, 5>> + end + + test "returned data is a copy (not sub-binary)" do + payload = :crypto.strong_rand_bytes(200) + input = <<200::little-unsigned-32>> <> payload <> <<0xFF>> + {data, _rest} = DataReader.read(:variant, input) + assert :binary.referenced_byte_size(data) == byte_size(data) + end + end +end diff --git a/test/tds/type/datetime_test.exs b/test/tds/type/datetime_test.exs new file mode 100644 index 0000000..99da926 --- /dev/null +++ b/test/tds/type/datetime_test.exs @@ -0,0 +1,758 @@ +defmodule Tds.Type.DateTimeTest do + use ExUnit.Case, async: true + + alias Tds.Type.DateTime, as: DTType + + # Epoch constants matching the wire format + @year_1900_days :calendar.date_to_gregorian_days({1900, 1, 1}) + + describe "type_codes/0" do + test "returns all 7 datetime type codes" do + codes = DTType.type_codes() + + assert 0x28 in codes + assert 0x29 in codes + assert 0x2A in codes + assert 0x2B in codes + assert 0x3A in codes + assert 0x3D in codes + assert 0x6F in codes + assert length(codes) == 7 + end + end + + describe "type_names/0" do + test "returns all datetime-related names" do + names = DTType.type_names() + + assert :date in names + assert :time in names + 
assert :datetime in names + assert :datetime2 in names + assert :smalldatetime in names + assert :datetimeoffset in names + end + end + + # ------------------------------------------------------------------- + # decode_metadata + # ------------------------------------------------------------------- + + describe "decode_metadata/1 for daten (0x28)" do + test "returns bytelen reader, no scale" do + input = <<0x28, 0xAA, 0xBB>> + + assert {:ok, meta, <<0xAA, 0xBB>>} = + DTType.decode_metadata(input) + + assert meta.data_reader == :bytelen + end + end + + describe "decode_metadata/1 for timen (0x29)" do + test "reads 1-byte scale" do + input = <<0x29, 0x07, 0xCC>> + + assert {:ok, meta, <<0xCC>>} = + DTType.decode_metadata(input) + + assert meta.data_reader == :bytelen + assert meta.scale == 7 + end + + test "reads scale 0" do + input = <<0x29, 0x00, 0xDD>> + + assert {:ok, meta, <<0xDD>>} = + DTType.decode_metadata(input) + + assert meta.scale == 0 + end + end + + describe "decode_metadata/1 for datetime2n (0x2A)" do + test "reads 1-byte scale" do + input = <<0x2A, 0x03, 0xEE>> + + assert {:ok, meta, <<0xEE>>} = + DTType.decode_metadata(input) + + assert meta.data_reader == :bytelen + assert meta.scale == 3 + end + end + + describe "decode_metadata/1 for datetimeoffsetn (0x2B)" do + test "reads 1-byte scale" do + input = <<0x2B, 0x07, 0xFF>> + + assert {:ok, meta, <<0xFF>>} = + DTType.decode_metadata(input) + + assert meta.data_reader == :bytelen + assert meta.scale == 7 + end + end + + describe "decode_metadata/1 for smalldatetime (0x3A)" do + test "returns fixed 4-byte reader" do + input = <<0x3A, 0xAA>> + + assert {:ok, %{data_reader: {:fixed, 4}}, <<0xAA>>} = + DTType.decode_metadata(input) + end + end + + describe "decode_metadata/1 for datetime (0x3D)" do + test "returns fixed 8-byte reader" do + input = <<0x3D, 0xBB>> + + assert {:ok, %{data_reader: {:fixed, 8}}, <<0xBB>>} = + DTType.decode_metadata(input) + end + end + + describe "decode_metadata/1 for 
datetimen (0x6F)" do + test "reads 1-byte length, returns bytelen reader" do + input = <<0x6F, 0x08, 0xCC>> + + assert {:ok, meta, <<0xCC>>} = + DTType.decode_metadata(input) + + assert meta.data_reader == :bytelen + assert meta.length == 8 + end + + test "reads 4-byte length for smalldatetime variant" do + input = <<0x6F, 0x04, 0xDD>> + + assert {:ok, meta, <<0xDD>>} = + DTType.decode_metadata(input) + + assert meta.data_reader == :bytelen + assert meta.length == 4 + end + end + + # ------------------------------------------------------------------- + # decode — daten (0x28) + # ------------------------------------------------------------------- + + describe "decode/2 daten" do + @meta_date %{type_code: 0x28} + + test "nil returns nil" do + assert DTType.decode(nil, @meta_date) == nil + end + + test "decodes 2024-01-01" do + days = :calendar.date_to_gregorian_days({2024, 1, 1}) - 366 + wire = <<days::little-unsigned-24>> + + assert DTType.decode(wire, @meta_date) == ~D[2024-01-01] + end + + test "decodes epoch boundary 0001-01-01" do + wire = <<0, 0, 0>> + assert DTType.decode(wire, @meta_date) == ~D[0001-01-01] + end + + test "decodes max boundary 9999-12-31" do + days = :calendar.date_to_gregorian_days({9999, 12, 31}) - 366 + wire = <<days::little-unsigned-24>> + + assert DTType.decode(wire, @meta_date) == ~D[9999-12-31] + end + end + + # ------------------------------------------------------------------- + # decode — timen (0x29) + # ------------------------------------------------------------------- + + describe "decode/2 timen" do + test "nil returns nil" do + assert DTType.decode(nil, %{type_code: 0x29, scale: 7}) == nil + end + + test "decodes midnight at scale 0 (3 bytes)" do + wire = <<0, 0, 0>> + meta = %{type_code: 0x29, scale: 0} + + assert DTType.decode(wire, meta) == ~T[00:00:00] + end + + test "decodes 12:30:45 at scale 0 (3 bytes)" do + fsec = 12 * 3600 + 30 * 60 + 45 + wire = <<fsec::little-unsigned-24>> + meta = %{type_code: 0x29, scale: 0} + + assert DTType.decode(wire, meta) == ~T[12:30:45] + end + + test "decodes 
12:30:45 at scale 4 (4 bytes)" do + fsec = 12 * 3600 * 10_000 + 30 * 60 * 10_000 + 45 * 10_000 + wire = <<fsec::little-unsigned-32>> + meta = %{type_code: 0x29, scale: 4} + + assert DTType.decode(wire, meta) == ~T[12:30:45.0000] + end + + test "decodes 12:30:45 at scale 7 (5 bytes)" do + fsec = 12 * 3600 * 10_000_000 + 30 * 60 * 10_000_000 + 45 * 10_000_000 + wire = <<fsec::little-unsigned-40>> + meta = %{type_code: 0x29, scale: 7} + + # scale 7 > 6, truncated to microseconds (scale 6) + assert DTType.decode(wire, meta) == ~T[12:30:45.000000] + end + + test "decodes time with fractional seconds at scale 3" do + # 12:30:45.123 at scale 3: (12*3600+30*60+45)*1000 + 123 = 45045123 + fsec = (12 * 3600 + 30 * 60 + 45) * 1_000 + 123 + wire = <<fsec::little-unsigned-32>> + meta = %{type_code: 0x29, scale: 3} + + result = DTType.decode(wire, meta) + assert result.hour == 12 + assert result.minute == 30 + assert result.second == 45 + {usec, precision} = result.microsecond + assert precision == 3 + assert usec == 123_000 + end + end + + # ------------------------------------------------------------------- + # decode — smalldatetime (0x3A) + # ------------------------------------------------------------------- + + describe "decode/2 smalldatetime" do + @meta_sdt %{type_code: 0x3A} + + test "decodes 1900-01-01 00:00" do + wire = <<0::little-unsigned-16, 0::little-unsigned-16>> + + assert DTType.decode(wire, @meta_sdt) == + ~N[1900-01-01 00:00:00] + end + + test "decodes 2000-01-01 00:30" do + days = + :calendar.date_to_gregorian_days({2000, 1, 1}) - + @year_1900_days + + wire = <<days::little-unsigned-16, 30::little-unsigned-16>> + + assert DTType.decode(wire, @meta_sdt) == + ~N[2000-01-01 00:30:00] + end + + test "decodes with full hour/minute" do + days = + :calendar.date_to_gregorian_days({2024, 6, 15}) - + @year_1900_days + + mins = 14 * 60 + 30 + wire = <<days::little-unsigned-16, mins::little-unsigned-16>> + + assert DTType.decode(wire, @meta_sdt) == + ~N[2024-06-15 14:30:00] + end + end + + # ------------------------------------------------------------------- + # decode — datetime (0x3D) + # 
------------------------------------------------------------------- + + describe "decode/2 datetime" do + @meta_dt %{type_code: 0x3D} + + test "decodes 1900-01-01 00:00:00.000" do + wire = <<0::little-signed-32, 0::little-unsigned-32>> + + assert DTType.decode(wire, @meta_dt) == + ~N[1900-01-01 00:00:00.000] + end + + test "decodes 2000-01-01 12:00:00.000" do + days = + :calendar.date_to_gregorian_days({2000, 1, 1}) - + @year_1900_days + + secs300 = round(12 * 3600 * 1000 / (10 / 3)) + wire = <<days::little-signed-32, secs300::little-unsigned-32>> + + result = DTType.decode(wire, @meta_dt) + assert result.year == 2000 + assert result.month == 1 + assert result.day == 1 + assert result.hour == 12 + assert result.minute == 0 + assert result.second == 0 + end + + test "decodes 2000-01-01 12:34:56.123" do + days = + :calendar.date_to_gregorian_days({2000, 1, 1}) - + @year_1900_days + + ms = ((12 * 60 + 34) * 60 + 56) * 1_000 + 123 + secs300 = round(ms / (10 / 3)) + wire = <<days::little-signed-32, secs300::little-unsigned-32>> + + result = DTType.decode(wire, @meta_dt) + assert result.year == 2000 + assert result.month == 1 + assert result.day == 1 + assert result.hour == 12 + assert result.minute == 34 + assert result.second == 56 + end + end + + # ------------------------------------------------------------------- + # decode — datetimen (0x6F) + # ------------------------------------------------------------------- + + describe "decode/2 datetimen" do + test "nil returns nil" do + meta = %{type_code: 0x6F, length: 8} + assert DTType.decode(nil, meta) == nil + end + + test "delegates 4-byte to smalldatetime" do + meta = %{type_code: 0x6F, length: 4} + wire = <<0::little-unsigned-16, 0::little-unsigned-16>> + + assert DTType.decode(wire, meta) == + ~N[1900-01-01 00:00:00] + end + + test "delegates 8-byte to datetime" do + meta = %{type_code: 0x6F, length: 8} + wire = <<0::little-signed-32, 0::little-unsigned-32>> + + assert DTType.decode(wire, meta) == + ~N[1900-01-01 00:00:00.000] + end + end + + # ------------------------------------------------------------------- + # 
decode — datetime2n (0x2A) + # ------------------------------------------------------------------- + + describe "decode/2 datetime2n" do + test "nil returns nil" do + meta = %{type_code: 0x2A, scale: 7} + assert DTType.decode(nil, meta) == nil + end + + test "decodes 2024-01-01 12:30:45 at scale 0" do + time_fsec = 12 * 3600 + 30 * 60 + 45 + time_bytes = <<time_fsec::little-unsigned-24>> + + days = :calendar.date_to_gregorian_days({2024, 1, 1}) - 366 + date_bytes = <<days::little-unsigned-24>> + + wire = time_bytes <> date_bytes + meta = %{type_code: 0x2A, scale: 0} + + assert DTType.decode(wire, meta) == + ~N[2024-01-01 12:30:45] + end + + test "decodes 2024-06-15 14:30:00 at scale 4" do + fsec = (14 * 3600 + 30 * 60 + 0) * 10_000 + time_bytes = <<fsec::little-unsigned-32>> + + days = :calendar.date_to_gregorian_days({2024, 6, 15}) - 366 + date_bytes = <<days::little-unsigned-24>> + + wire = time_bytes <> date_bytes + meta = %{type_code: 0x2A, scale: 4} + + assert DTType.decode(wire, meta) == + ~N[2024-06-15 14:30:00.0000] + end + + test "decodes at scale 7" do + fsec = (10 * 3600 + 15 * 60 + 30) * 10_000_000 + time_bytes = <<fsec::little-unsigned-40>> + + days = :calendar.date_to_gregorian_days({2024, 3, 5}) - 366 + date_bytes = <<days::little-unsigned-24>> + + wire = time_bytes <> date_bytes + meta = %{type_code: 0x2A, scale: 7} + + result = DTType.decode(wire, meta) + assert result.year == 2024 + assert result.month == 3 + assert result.day == 5 + assert result.hour == 10 + assert result.minute == 15 + assert result.second == 30 + end + end + + # ------------------------------------------------------------------- + # decode — datetimeoffsetn (0x2B) + # ------------------------------------------------------------------- + + describe "decode/2 datetimeoffsetn" do + test "nil returns nil" do + meta = %{type_code: 0x2B, scale: 7} + assert DTType.decode(nil, meta) == nil + end + + test "decodes UTC datetime at scale 0" do + # Wire stores UTC time + 0 offset + time_fsec = 12 * 3600 + 30 * 60 + 45 + time_bytes = <<time_fsec::little-unsigned-24>> + + days = :calendar.date_to_gregorian_days({2024, 1, 1}) - 366 + date_bytes = <<days::little-unsigned-24>> + + offset_bytes = 
<<0::little-signed-16>> + + wire = time_bytes <> date_bytes <> offset_bytes + meta = %{type_code: 0x2B, scale: 0} + + result = DTType.decode(wire, meta) + assert %DateTime{} = result + assert result.year == 2024 + assert result.month == 1 + assert result.day == 1 + assert result.hour == 12 + assert result.minute == 30 + assert result.second == 45 + assert result.utc_offset == 0 + end + + test "decodes positive offset (+05:30 = 330 min) as UTC" do + # Wire stores UTC time (07:00:45 UTC) + offset 330 min + # Decode returns UTC DateTime (offset discarded) + time_fsec = 7 * 3600 + 0 * 60 + 45 + time_bytes = <<time_fsec::little-unsigned-24>> + + days = :calendar.date_to_gregorian_days({2024, 1, 1}) - 366 + date_bytes = <<days::little-unsigned-24>> + + offset_bytes = <<330::little-signed-16>> + + wire = time_bytes <> date_bytes <> offset_bytes + meta = %{type_code: 0x2B, scale: 0} + + result = DTType.decode(wire, meta) + assert %DateTime{} = result + # Returns UTC time, not local time + assert result.hour == 7 + assert result.minute == 0 + assert result.second == 45 + assert result.utc_offset == 0 + end + end + + # ------------------------------------------------------------------- + # encode — Date + # ------------------------------------------------------------------- + + describe "encode/2 Date" do + test "nil produces daten null" do + {type_code, meta, value} = DTType.encode(nil, %{type: :date}) + + assert type_code == 0x28 + assert IO.iodata_to_binary(meta) == <<0x28>> + assert IO.iodata_to_binary(value) == <<0x00>> + end + + test "~D[2024-01-01] encodes to 3-byte LE days" do + {type_code, meta, value} = DTType.encode(~D[2024-01-01], %{type: :date}) + + assert type_code == 0x28 + assert IO.iodata_to_binary(meta) == <<0x28>> + + value_bin = IO.iodata_to_binary(value) + <<0x03, days_wire::binary-3>> = value_bin + <<days::little-unsigned-24>> = days_wire + + expected_days = + :calendar.date_to_gregorian_days({2024, 1, 1}) - 366 + + assert days == expected_days + end + end + + # ------------------------------------------------------------------- + # 
encode — Time + # ------------------------------------------------------------------- + + describe "encode/2 Time" do + test "nil produces timen null" do + {type_code, meta, value} = DTType.encode(nil, %{type: :time}) + + assert type_code == 0x29 + meta_bin = IO.iodata_to_binary(meta) + <<0x29, _scale>> = meta_bin + assert IO.iodata_to_binary(value) == <<0x00>> + end + + test "~T[12:30:45] encodes correctly" do + time = ~T[12:30:45] + {type_code, _meta, value} = DTType.encode(time, %{type: :time}) + + assert type_code == 0x29 + value_bin = IO.iodata_to_binary(value) + assert byte_size(value_bin) > 1 + end + + test "~T[12:30:45.123456] preserves microseconds" do + time = ~T[12:30:45.123456] + {type_code, meta, value} = DTType.encode(time, %{type: :time}) + + assert type_code == 0x29 + meta_bin = IO.iodata_to_binary(meta) + <<0x29, scale>> = meta_bin + assert scale == 6 + + value_bin = IO.iodata_to_binary(value) + assert byte_size(value_bin) > 1 + end + end + + # ------------------------------------------------------------------- + # encode — NaiveDateTime (datetime2) + # ------------------------------------------------------------------- + + describe "encode/2 NaiveDateTime" do + test "nil produces datetime2n null" do + {type_code, meta, value} = + DTType.encode(nil, %{type: :datetime2}) + + assert type_code == 0x2A + meta_bin = IO.iodata_to_binary(meta) + <<0x2A, _scale>> = meta_bin + assert IO.iodata_to_binary(value) == <<0x00>> + end + + test "~N[2024-01-01 12:30:45] encodes correctly" do + ndt = ~N[2024-01-01 12:30:45] + + {type_code, meta, value} = + DTType.encode(ndt, %{type: :datetime2}) + + assert type_code == 0x2A + meta_bin = IO.iodata_to_binary(meta) + <<0x2A, scale>> = meta_bin + assert scale == 0 + + value_bin = IO.iodata_to_binary(value) + # 1 byte length + 3 bytes time + 3 bytes date = 7 + assert byte_size(value_bin) == 7 + end + end + + # ------------------------------------------------------------------- + # encode — DateTime (datetimeoffset) + # 
------------------------------------------------------------------- + + describe "encode/2 DateTime" do + test "nil produces datetimeoffsetn null" do + {type_code, meta, value} = + DTType.encode(nil, %{type: :datetimeoffset}) + + assert type_code == 0x2B + meta_bin = IO.iodata_to_binary(meta) + <<0x2B, _scale>> = meta_bin + assert IO.iodata_to_binary(value) == <<0x00>> + end + + test "UTC DateTime encodes correctly" do + {:ok, dt} = DateTime.new(~D[2024-01-01], ~T[12:30:45], "Etc/UTC") + + {type_code, _meta, value} = + DTType.encode(dt, %{type: :datetimeoffset}) + + assert type_code == 0x2B + value_bin = IO.iodata_to_binary(value) + assert byte_size(value_bin) > 1 + end + end + + # ------------------------------------------------------------------- + # param_descriptor + # ------------------------------------------------------------------- + + describe "param_descriptor/2" do + test "date" do + assert DTType.param_descriptor(~D[2024-01-01], %{type: :date}) == + "date" + end + + test "time with scale" do + time = ~T[12:30:45.123456] + + assert DTType.param_descriptor(time, %{type: :time}) == + "time(6)" + end + + test "time nil" do + assert DTType.param_descriptor(nil, %{type: :time}) == "time" + end + + test "datetime" do + assert DTType.param_descriptor(nil, %{type: :datetime}) == + "datetime" + end + + test "smalldatetime" do + assert DTType.param_descriptor(nil, %{type: :smalldatetime}) == + "smalldatetime" + end + + test "datetime2 with scale" do + ndt = ~N[2024-01-01 12:30:45.123456] + + assert DTType.param_descriptor(ndt, %{type: :datetime2}) == + "datetime2(6)" + end + + test "datetime2 nil" do + assert DTType.param_descriptor(nil, %{type: :datetime2}) == + "datetime2" + end + + test "datetimeoffset with scale" do + {:ok, dt} = + DateTime.new(~D[2024-01-01], ~T[12:30:45.123], "Etc/UTC") + + assert DTType.param_descriptor(dt, %{type: :datetimeoffset}) == + "datetimeoffset(3)" + end + + test "datetimeoffset nil" do + assert DTType.param_descriptor(nil, %{type: 
:datetimeoffset}) == + "datetimeoffset" + end + end + + # ------------------------------------------------------------------- + # infer + # ------------------------------------------------------------------- + + describe "infer/1" do + test "Date infers as date" do + assert {:ok, %{type: :date}} = DTType.infer(~D[2024-01-01]) + end + + test "Time infers as time" do + assert {:ok, %{type: :time}} = DTType.infer(~T[12:30:00]) + end + + test "NaiveDateTime infers as datetime2" do + assert {:ok, %{type: :datetime2}} = + DTType.infer(~N[2024-01-01 12:30:00]) + end + + test "DateTime infers as datetimeoffset" do + {:ok, dt} = + DateTime.new(~D[2024-01-01], ~T[12:30:00], "Etc/UTC") + + assert {:ok, %{type: :datetimeoffset}} = DTType.infer(dt) + end + + test "integer skips" do + assert :skip = DTType.infer(42) + end + + test "string skips" do + assert :skip = DTType.infer("2024-01-01") + end + + test "nil skips" do + assert :skip = DTType.infer(nil) + end + end + + # ------------------------------------------------------------------- + # encode/decode roundtrips + # ------------------------------------------------------------------- + + describe "encode/decode roundtrip" do + test "Date roundtrips" do + original = ~D[2024-06-15] + {_type, _meta, value_bin} = DTType.encode(original, %{type: :date}) + value = IO.iodata_to_binary(value_bin) + <<0x03, data::binary-3>> = value + + assert DTType.decode(data, %{type_code: 0x28}) == original + end + + test "Time roundtrips" do + original = ~T[14:30:00.123456] + + {_type, meta_bin, value_bin} = + DTType.encode(original, %{type: :time}) + + meta = IO.iodata_to_binary(meta_bin) + <<0x29, scale>> = meta + + value = IO.iodata_to_binary(value_bin) + <<_len, data::binary>> = value + + decoded = + DTType.decode(data, %{type_code: 0x29, scale: scale}) + + assert decoded.hour == original.hour + assert decoded.minute == original.minute + assert decoded.second == original.second + {orig_us, _} = original.microsecond + {dec_us, _} = 
decoded.microsecond + assert dec_us == orig_us + end + + test "NaiveDateTime roundtrips at scale 0" do + original = ~N[2024-01-15 08:45:30] + + {_type, meta_bin, value_bin} = + DTType.encode(original, %{type: :datetime2}) + + meta = IO.iodata_to_binary(meta_bin) + <<0x2A, scale>> = meta + + value = IO.iodata_to_binary(value_bin) + <<_len, data::binary>> = value + + decoded = + DTType.decode(data, %{type_code: 0x2A, scale: scale}) + + assert decoded == original + end + + test "DateTime UTC roundtrips" do + {:ok, original} = + DateTime.new(~D[2024-03-15], ~T[16:20:00], "Etc/UTC") + + {_type, meta_bin, value_bin} = + DTType.encode(original, %{type: :datetimeoffset}) + + meta = IO.iodata_to_binary(meta_bin) + <<0x2B, scale>> = meta + + value = IO.iodata_to_binary(value_bin) + <<_len, data::binary>> = value + + decoded = + DTType.decode(data, %{type_code: 0x2B, scale: scale}) + + assert %DateTime{} = decoded + assert decoded.year == original.year + assert decoded.month == original.month + assert decoded.day == original.day + assert decoded.hour == original.hour + assert decoded.minute == original.minute + assert decoded.second == original.second + assert decoded.utc_offset == 0 + end + end +end diff --git a/test/tds/type/decimal_test.exs b/test/tds/type/decimal_test.exs new file mode 100644 index 0000000..c8a3824 --- /dev/null +++ b/test/tds/type/decimal_test.exs @@ -0,0 +1,397 @@ +defmodule Tds.Type.DecimalTest do + use ExUnit.Case, async: true + + alias Tds.Type.Decimal, as: DecType + + describe "type_codes/0" do + test "returns all four decimal/numeric type codes" do + codes = DecType.type_codes() + # decimal (0x37), numeric (0x3F), decimaln (0x6A), numericn (0x6C) + assert 0x37 in codes + assert 0x3F in codes + assert 0x6A in codes + assert 0x6C in codes + assert length(codes) == 4 + end + end + + describe "type_names/0" do + test "returns :decimal and :numeric" do + assert DecType.type_names() == [:decimal, :numeric] + end + end + + describe "decode_metadata/1" do 
+ test "decimaln (0x6A) reads length, precision, and scale" do + # length=9, precision=18, scale=4, followed by tail bytes + input = <<0x6A, 9, 18, 4, 0xAA, 0xBB>> + + assert {:ok, meta, <<0xAA, 0xBB>>} = + DecType.decode_metadata(input) + + assert meta.data_reader == :bytelen + assert meta.precision == 18 + assert meta.scale == 4 + end + + test "numericn (0x6C) reads length, precision, and scale" do + input = <<0x6C, 17, 38, 18, 0xCC>> + + assert {:ok, meta, <<0xCC>>} = + DecType.decode_metadata(input) + + assert meta.data_reader == :bytelen + assert meta.precision == 38 + assert meta.scale == 18 + end + + test "legacy decimal (0x37) reads length byte" do + input = <<0x37, 9, 0xDD>> + + assert {:ok, meta, <<0xDD>>} = + DecType.decode_metadata(input) + + assert meta.data_reader == :bytelen + end + + test "legacy numeric (0x3F) reads length byte" do + input = <<0x3F, 9, 0xEE>> + + assert {:ok, meta, <<0xEE>>} = + DecType.decode_metadata(input) + + assert meta.data_reader == :bytelen + end + end + + describe "decode/2" do + test "nil returns nil" do + assert DecType.decode(nil, %{precision: 10, scale: 4}) == nil + end + + test "positive value: sign 0x01 with integer bytes" do + # 10001234 as LE unsigned 4 bytes = Decimal.new("1000.1234") + value_bytes = :binary.encode_unsigned(10_001_234, :little) + data = <<0x01>> <> value_bytes + meta = %{precision: 8, scale: 4} + + result = DecType.decode(data, meta) + + assert Decimal.equal?(result, Decimal.new("1000.1234")) + assert result.sign == 1 + end + + test "negative value: sign 0x00 with integer bytes" do + # 10000000 as LE unsigned 4 bytes = Decimal.new("-1000.0000") + value_bytes = :binary.encode_unsigned(10_000_000, :little) + data = <<0x00>> <> value_bytes + meta = %{precision: 8, scale: 4} + + result = DecType.decode(data, meta) + + assert Decimal.equal?(result, Decimal.new("-1000.0000")) + assert result.sign == -1 + end + + test "zero decodes correctly" do + # Zero coefficient, positive sign + data = <<0x01, 0x00, 
0x00, 0x00, 0x00>> + meta = %{precision: 10, scale: 4} + + result = DecType.decode(data, meta) + + assert Decimal.equal?(result, Decimal.new("0.0000")) + end + + test "decodes value matching existing test vector: 1000" do + # From types_test.exs: value=1000, coef=1000 + value_bytes = <<232, 3, 0, 0>> + data = <<0x01>> <> value_bytes + meta = %{precision: 8, scale: 0} + + result = DecType.decode(data, meta) + + assert Decimal.equal?(result, Decimal.new("1000")) + end + + test "decodes 99999.99999 with precision 10, scale 5" do + # 9999999999 as LE + value_bytes = :binary.encode_unsigned(9_999_999_999, :little) + data = <<0x01>> <> value_bytes + meta = %{precision: 10, scale: 5} + + result = DecType.decode(data, meta) + + assert Decimal.equal?(result, Decimal.new("99999.99999")) + end + + test "decodes max precision (38 digits)" do + max_val = 99_999_999_999_999_999_999_999_999_999_999_999_999 + value_bytes = :binary.encode_unsigned(max_val, :little) + data = <<0x01>> <> value_bytes + meta = %{precision: 38, scale: 0} + + result = DecType.decode(data, meta) + + expected = Decimal.new("99999999999999999999999999999999999999") + assert Decimal.equal?(result, expected) + end + + test "does not mutate process dictionary precision" do + precision_before = Decimal.Context.get().precision + + value_bytes = :binary.encode_unsigned(12345, :little) + data = <<0x01>> <> value_bytes + meta = %{precision: 38, scale: 2} + DecType.decode(data, meta) + + assert Decimal.Context.get().precision == precision_before + end + end + + describe "encode/2" do + test "nil produces decimaln null encoding" do + {type_code, meta, value} = DecType.encode(nil, %{}) + + assert type_code == 0x6A + assert IO.iodata_to_binary(meta) == <<0x6A, 0x01, 0x01, 0x00>> + assert IO.iodata_to_binary(value) == <<0x00>> + end + + test "Decimal.new(\"12345.6789\") encodes correctly" do + dec = Decimal.new("12345.6789") + {type_code, meta_bin, value_bin} = DecType.encode(dec, %{}) + + assert type_code == 0x6A + + 
meta = IO.iodata_to_binary(meta_bin) + # type(0x6A) + value_size + precision(9) + scale(4) + <<0x6A, value_size, precision, scale>> = meta + assert precision == 9 + assert scale == 4 + + value = IO.iodata_to_binary(value_bin) + # byte_len + sign + LE value + <<byte_len, sign, rest::binary>> = value + assert byte_len == value_size + assert sign == 1 + + int_val = + rest + |> :binary.bin_to_list() + |> Enum.reject(&(&1 == 0)) + |> :binary.list_to_bin() + |> :binary.decode_unsigned(:little) + + assert int_val == 123_456_789 + end + + test "Decimal.new(\"0\") encodes zero" do + dec = Decimal.new("0") + {type_code, meta_bin, value_bin} = DecType.encode(dec, %{}) + + assert type_code == 0x6A + + meta = IO.iodata_to_binary(meta_bin) + <<0x6A, _value_size, precision, scale>> = meta + assert precision == 1 + assert scale == 0 + + value = IO.iodata_to_binary(value_bin) + <<_byte_len, sign, _rest::binary>> = value + assert sign == 1 + end + + test "Decimal.new(\"-123.45\") encodes with correct sign" do + dec = Decimal.new("-123.45") + {type_code, _meta_bin, value_bin} = DecType.encode(dec, %{}) + + assert type_code == 0x6A + + value = IO.iodata_to_binary(value_bin) + <<_byte_len, sign, rest::binary>> = value + assert sign == 0 + + int_val = + rest + |> :binary.bin_to_list() + |> Enum.reject(&(&1 == 0)) + |> :binary.list_to_bin() + |> :binary.decode_unsigned(:little) + + assert int_val == 12345 + end + + test "scientific notation Decimal.new(\"1E+3\") handled correctly" do + dec = Decimal.new("1E+3") + {type_code, meta_bin, value_bin} = DecType.encode(dec, %{}) + + assert type_code == 0x6A + + meta = IO.iodata_to_binary(meta_bin) + <<0x6A, _value_size, precision, scale>> = meta + assert precision == 4 + assert scale == 0 + + value = IO.iodata_to_binary(value_bin) + <<_byte_len, sign, rest::binary>> = value + assert sign == 1 + + int_val = + rest + |> :binary.bin_to_list() + |> Enum.reject(&(&1 == 0)) + |> :binary.list_to_bin() + |> :binary.decode_unsigned(:little) + + assert int_val == 1000 + end + + test 
"does not mutate process dictionary precision" do + precision_before = Decimal.Context.get().precision + + dec = Decimal.new("12345.6789") + DecType.encode(dec, %{}) + + assert Decimal.Context.get().precision == precision_before + end + end + + describe "param_descriptor/2" do + test "nil returns decimal(1, 0)" do + assert DecType.param_descriptor(nil, %{}) == "decimal(1, 0)" + end + + test "Decimal with fractional part returns correct precision/scale" do + dec = Decimal.new("12345.6789") + assert DecType.param_descriptor(dec, %{}) == "decimal(9, 4)" + end + + test "Decimal without fractional part returns scale 0" do + dec = Decimal.new("1000") + assert DecType.param_descriptor(dec, %{}) == "decimal(4, 0)" + end + + test "scientific notation Decimal returns correct descriptor" do + dec = Decimal.new("1E+3") + assert DecType.param_descriptor(dec, %{}) == "decimal(4, 0)" + end + + test "negative Decimal returns correct descriptor" do + dec = Decimal.new("-123.45") + assert DecType.param_descriptor(dec, %{}) == "decimal(5, 2)" + end + + test "zero returns decimal(1, 0)" do + dec = Decimal.new("0") + assert DecType.param_descriptor(dec, %{}) == "decimal(1, 0)" + end + + test "max precision returns decimal(38, 0)" do + dec = Decimal.new("99999999999999999999999999999999999999") + assert DecType.param_descriptor(dec, %{}) == "decimal(38, 0)" + end + end + + describe "infer/1" do + test "Decimal struct infers" do + assert {:ok, %{}} = DecType.infer(Decimal.new("42.5")) + end + + test "Decimal zero infers" do + assert {:ok, %{}} = DecType.infer(Decimal.new("0")) + end + + test "integer skips" do + assert :skip = DecType.infer(42) + end + + test "float skips" do + assert :skip = DecType.infer(3.14) + end + + test "string skips" do + assert :skip = DecType.infer("42.5") + end + + test "nil skips" do + assert :skip = DecType.infer(nil) + end + end + + describe "encode/decode roundtrip" do + test "Decimal.new(\"1000.1234\") roundtrips" do + original = Decimal.new("1000.1234") 
+ {_type, meta_bin, value_bin} = DecType.encode(original, %{}) + + meta = IO.iodata_to_binary(meta_bin) + <<0x6A, _value_size, precision, scale>> = meta + + value = IO.iodata_to_binary(value_bin) + # Strip the byte_len prefix to get the raw data + <<_byte_len, data::binary>> = value + + decoded = DecType.decode(data, %{precision: precision, scale: scale}) + assert Decimal.equal?(decoded, original) + end + + test "Decimal.new(\"-99999.99999\") roundtrips" do + original = Decimal.new("-99999.99999") + {_type, meta_bin, value_bin} = DecType.encode(original, %{}) + + meta = IO.iodata_to_binary(meta_bin) + <<0x6A, _value_size, precision, scale>> = meta + + value = IO.iodata_to_binary(value_bin) + <<_byte_len, data::binary>> = value + + decoded = DecType.decode(data, %{precision: precision, scale: scale}) + assert Decimal.equal?(decoded, original) + end + + test "Decimal.new(\"1E+3\") roundtrips" do + original = Decimal.new("1E+3") + {_type, meta_bin, value_bin} = DecType.encode(original, %{}) + + meta = IO.iodata_to_binary(meta_bin) + <<0x6A, _value_size, precision, scale>> = meta + + value = IO.iodata_to_binary(value_bin) + <<_byte_len, data::binary>> = value + + decoded = DecType.decode(data, %{precision: precision, scale: scale}) + # 1E+3 normalizes to 1000 + assert Decimal.equal?(decoded, Decimal.new("1000")) + end + + test "Decimal.new(\"0.0001\") roundtrips" do + original = Decimal.new("0.0001") + {_type, meta_bin, value_bin} = DecType.encode(original, %{}) + + meta = IO.iodata_to_binary(meta_bin) + <<0x6A, _value_size, precision, scale>> = meta + + value = IO.iodata_to_binary(value_bin) + <<_byte_len, data::binary>> = value + + decoded = DecType.decode(data, %{precision: precision, scale: scale}) + assert Decimal.equal?(decoded, original) + end + + test "process dictionary unchanged after roundtrip" do + precision_before = Decimal.Context.get().precision + + original = Decimal.new("12345.6789") + {_type, meta_bin, value_bin} = DecType.encode(original, %{}) + + 
meta = IO.iodata_to_binary(meta_bin) + <<0x6A, _value_size, p, s>> = meta + + value = IO.iodata_to_binary(value_bin) + <<_byte_len, data::binary>> = value + DecType.decode(data, %{precision: p, scale: s}) + + assert Decimal.Context.get().precision == precision_before + end + end +end diff --git a/test/tds/type/float_test.exs b/test/tds/type/float_test.exs new file mode 100644 index 0000000..ac067b3 --- /dev/null +++ b/test/tds/type/float_test.exs @@ -0,0 +1,202 @@ +defmodule Tds.Type.FloatTest do + use ExUnit.Case, async: true + + alias Tds.Type.Float + + describe "type_codes/0" do + test "returns real, float, and floatn codes" do + codes = Float.type_codes() + assert 0x3B in codes + assert 0x3E in codes + assert 0x6D in codes + assert length(codes) == 3 + end + end + + describe "type_names/0" do + test "returns :float" do + assert Float.type_names() == [:float] + end + end + + describe "decode_metadata/1" do + test "real (0x3B) is fixed 4 bytes" do + input = <<0x3B, 0xAA, 0xBB>> + + assert {:ok, %{data_reader: {:fixed, 4}}, <<0xAA, 0xBB>>} = + Float.decode_metadata(input) + end + + test "float (0x3E) is fixed 8 bytes" do + input = <<0x3E, 0xAA, 0xBB>> + + assert {:ok, %{data_reader: {:fixed, 8}}, <<0xAA, 0xBB>>} = + Float.decode_metadata(input) + end + + test "floatn (0x6D) reads 1-byte length" do + input = <<0x6D, 0x04, 0xCC, 0xDD>> + + assert {:ok, %{data_reader: :bytelen, length: 4}, <<0xCC, 0xDD>>} = + Float.decode_metadata(input) + end + + test "floatn (0x6D) with length 8" do + input = <<0x6D, 0x08, 0xEE>> + + assert {:ok, %{data_reader: :bytelen, length: 8}, <<0xEE>>} = + Float.decode_metadata(input) + end + end + + describe "decode/2" do + test "nil returns nil" do + assert Float.decode(nil, %{}) == nil + end + + test "4-byte real (float-32) decodes 1.5" do + data = <<1.5::little-float-32>> + assert Float.decode(data, %{length: 4}) == 1.5 + end + + test "4-byte real (float-32) decodes 0.0" do + data = <<0.0::little-float-32>> + assert Float.decode(data, 
%{length: 4}) == 0.0 + end + + test "4-byte real (float-32) decodes negative" do + data = <<-3.14::little-float-32>> + result = Float.decode(data, %{length: 4}) + assert_in_delta result, -3.14, 0.001 + end + + test "8-byte float (float-64) decodes 1.5" do + data = <<1.5::little-float-64>> + assert Float.decode(data, %{length: 8}) == 1.5 + end + + test "8-byte float (float-64) decodes 0.0" do + data = <<0.0::little-float-64>> + assert Float.decode(data, %{length: 8}) == 0.0 + end + + test "8-byte float (float-64) decodes negative" do + data = <<-3.14::little-float-64>> + result = Float.decode(data, %{length: 8}) + assert_in_delta result, -3.14, 0.0000001 + end + + test "8-byte float (float-64) decodes large value" do + data = <<1.0e100::little-float-64>> + assert Float.decode(data, %{length: 8}) == 1.0e100 + end + + test "4-byte real without explicit length uses data size" do + data = <<1.5::little-float-32>> + assert Float.decode(data, %{}) == 1.5 + end + + test "8-byte float without explicit length uses data size" do + data = <<1.5::little-float-64>> + assert Float.decode(data, %{}) == 1.5 + end + end + + describe "encode/2" do + test "nil produces floatn null encoding" do + {type_code, meta, value} = Float.encode(nil, %{}) + + assert type_code == 0x6D + assert IO.iodata_to_binary(meta) == <<0x6D, 0x08>> + assert IO.iodata_to_binary(value) == <<0x00>> + end + + test "positive float encodes as 8-byte float-64" do + {type_code, meta, value} = Float.encode(1.5, %{}) + + assert type_code == 0x6D + assert IO.iodata_to_binary(meta) == <<0x6D, 0x08>> + + assert IO.iodata_to_binary(value) == + <<0x08, 1.5::little-float-64>> + end + + test "zero encodes as 8-byte float-64" do + {type_code, meta, value} = Float.encode(0.0, %{}) + + assert type_code == 0x6D + assert IO.iodata_to_binary(meta) == <<0x6D, 0x08>> + + assert IO.iodata_to_binary(value) == + <<0x08, 0.0::little-float-64>> + end + + test "negative float encodes as 8-byte float-64" do + {type_code, meta, value} = 
Float.encode(-3.14, %{}) + + assert type_code == 0x6D + assert IO.iodata_to_binary(meta) == <<0x6D, 0x08>> + + assert IO.iodata_to_binary(value) == + <<0x08, -3.14::little-float-64>> + end + + test "large float encodes as 8-byte float-64" do + {type_code, meta, value} = Float.encode(1.0e100, %{}) + + assert type_code == 0x6D + assert IO.iodata_to_binary(meta) == <<0x6D, 0x08>> + + assert IO.iodata_to_binary(value) == + <<0x08, 1.0e100::little-float-64>> + end + end + + describe "param_descriptor/2" do + test "nil returns decimal(1,0)" do + assert Float.param_descriptor(nil, %{}) == "decimal(1,0)" + end + + test "float value returns float(53)" do + assert Float.param_descriptor(1.5, %{}) == "float(53)" + end + + test "negative float returns float(53)" do + assert Float.param_descriptor(-3.14, %{}) == "float(53)" + end + + test "zero float returns float(53)" do + assert Float.param_descriptor(0.0, %{}) == "float(53)" + end + end + + describe "infer/1" do + test "positive float infers" do + assert {:ok, %{}} = Float.infer(1.5) + end + + test "zero float infers" do + assert {:ok, %{}} = Float.infer(0.0) + end + + test "negative float infers" do + assert {:ok, %{}} = Float.infer(-3.14) + end + + test "integer skips" do + assert :skip = Float.infer(42) + end + + test "string skips" do + assert :skip = Float.infer("3.14") + end + + test "boolean skips" do + assert :skip = Float.infer(true) + end + + test "nil skips" do + assert :skip = Float.infer(nil) + end + end +end diff --git a/test/tds/type/integer_test.exs b/test/tds/type/integer_test.exs new file mode 100644 index 0000000..968204e --- /dev/null +++ b/test/tds/type/integer_test.exs @@ -0,0 +1,311 @@ +defmodule Tds.Type.IntegerTest do + use ExUnit.Case, async: true + + alias Tds.Type.Integer + + describe "type_codes/0" do + test "returns integer and null type codes" do + codes = Integer.type_codes() + assert 0x1F in codes + assert 0x30 in codes + assert 0x34 in codes + assert 0x38 in codes + assert 0x7F in codes + 
assert 0x26 in codes + assert length(codes) == 6 + end + end + + describe "type_names/0" do + test "returns :integer" do + assert Integer.type_names() == [:integer] + end + end + + describe "decode_metadata/1" do + test "tinyint (0x30) is fixed 1 byte" do + input = <<0x30, 0xAA, 0xBB>> + + assert {:ok, %{data_reader: {:fixed, 1}}, <<0xAA, 0xBB>>} = + Integer.decode_metadata(input) + end + + test "smallint (0x34) is fixed 2 bytes" do + input = <<0x34, 0xAA, 0xBB>> + + assert {:ok, %{data_reader: {:fixed, 2}}, <<0xAA, 0xBB>>} = + Integer.decode_metadata(input) + end + + test "int (0x38) is fixed 4 bytes" do + input = <<0x38, 0xAA, 0xBB>> + + assert {:ok, %{data_reader: {:fixed, 4}}, <<0xAA, 0xBB>>} = + Integer.decode_metadata(input) + end + + test "bigint (0x7F) is fixed 8 bytes" do + input = <<0x7F, 0xAA, 0xBB>> + + assert {:ok, %{data_reader: {:fixed, 8}}, <<0xAA, 0xBB>>} = + Integer.decode_metadata(input) + end + + test "intn (0x26) reads 1-byte length" do + input = <<0x26, 0x04, 0xCC, 0xDD>> + + assert {:ok, %{data_reader: :bytelen, length: 4}, <<0xCC, 0xDD>>} = + Integer.decode_metadata(input) + end + + test "intn (0x26) with length 8" do + input = <<0x26, 0x08, 0xEE>> + + assert {:ok, %{data_reader: :bytelen, length: 8}, <<0xEE>>} = + Integer.decode_metadata(input) + end + + test "intn (0x26) with length 1" do + input = <<0x26, 0x01, 0xFF>> + + assert {:ok, %{data_reader: :bytelen, length: 1}, <<0xFF>>} = + Integer.decode_metadata(input) + end + + test "intn (0x26) with length 2" do + input = <<0x26, 0x02, 0xFF>> + + assert {:ok, %{data_reader: :bytelen, length: 2}, <<0xFF>>} = + Integer.decode_metadata(input) + end + end + + describe "decode/2" do + test "nil returns nil" do + assert Integer.decode(nil, %{}) == nil + end + + test "1-byte unsigned tinyint" do + assert Integer.decode(<<42>>, %{length: 1}) == 42 + end + + test "1-byte unsigned tinyint max value" do + assert Integer.decode(<<255>>, %{length: 1}) == 255 + end + + test "2-byte little-endian signed 
smallint" do + assert Integer.decode(<<0xD2, 0x04>>, %{length: 2}) == 1234 + end + + test "2-byte negative smallint" do + assert Integer.decode(<<0xFE, 0xFF>>, %{length: 2}) == -2 + end + + test "2-byte smallint -1" do + assert Integer.decode(<<0xFF, 0xFF>>, %{length: 2}) == -1 + end + + test "4-byte little-endian signed int" do + assert Integer.decode(<<42, 0, 0, 0>>, %{length: 4}) == 42 + end + + test "4-byte negative int" do + assert Integer.decode(<<0xFE, 0xFF, 0xFF, 0xFF>>, %{length: 4}) == -2 + end + + test "4-byte int max positive" do + # 2_147_483_647 = 0x7FFFFFFF + assert Integer.decode( + <<0xFF, 0xFF, 0xFF, 0x7F>>, + %{length: 4} + ) == 2_147_483_647 + end + + test "8-byte little-endian signed bigint" do + assert Integer.decode( + <<42, 0, 0, 0, 0, 0, 0, 0>>, + %{length: 8} + ) == 42 + end + + test "8-byte negative bigint" do + assert Integer.decode( + <<0xFE, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF>>, + %{length: 8} + ) == -2 + end + + test "8-byte large bigint" do + # 1_000_000_000_000 = 0xE8D4A51000 + assert Integer.decode( + <<0x00, 0x10, 0xA5, 0xD4, 0xE8, 0x00, 0x00, 0x00>>, + %{length: 8} + ) == 1_000_000_000_000 + end + + test "decode without explicit length uses data size" do + assert Integer.decode(<<42>>, %{}) == 42 + assert Integer.decode(<<0xD2, 0x04>>, %{}) == 1234 + assert Integer.decode(<<42, 0, 0, 0>>, %{}) == 42 + + assert Integer.decode( + <<42, 0, 0, 0, 0, 0, 0, 0>>, + %{} + ) == 42 + end + end + + describe "encode/2" do + test "nil produces intn null encoding" do + {type_code, meta, value} = Integer.encode(nil, %{}) + + assert type_code == 0x26 + assert IO.iodata_to_binary(meta) == <<0x26, 0x04>> + assert IO.iodata_to_binary(value) == <<0x00>> + end + + test "small positive integer encodes as 4-byte intn" do + {type_code, meta, value} = Integer.encode(42, %{}) + + assert type_code == 0x26 + assert IO.iodata_to_binary(meta) == <<0x26, 0x04>> + + assert IO.iodata_to_binary(value) == + <<0x04, 42, 0, 0, 0>> + end + + test "zero encodes 
as 4-byte intn" do + {type_code, meta, value} = Integer.encode(0, %{}) + + assert type_code == 0x26 + assert IO.iodata_to_binary(meta) == <<0x26, 0x04>> + + assert IO.iodata_to_binary(value) == + <<0x04, 0, 0, 0, 0>> + end + + test "negative integer encodes as 4-byte signed" do + {type_code, meta, value} = Integer.encode(-2, %{}) + + assert type_code == 0x26 + assert IO.iodata_to_binary(meta) == <<0x26, 0x04>> + + assert IO.iodata_to_binary(value) == + <<0x04, 0xFE, 0xFF, 0xFF, 0xFF>> + end + + test "large positive encodes as 8-byte bigint" do + big = 3_000_000_000 + + {type_code, meta, value} = Integer.encode(big, %{}) + + assert type_code == 0x26 + assert IO.iodata_to_binary(meta) == <<0x26, 0x08>> + + assert IO.iodata_to_binary(value) == + <<0x08, big::little-signed-64>> + end + + test "max int32 boundary stays 4-byte" do + max_int32 = 2_147_483_647 + + {_type_code, meta, value} = + Integer.encode(max_int32, %{}) + + assert IO.iodata_to_binary(meta) == <<0x26, 0x04>> + + assert IO.iodata_to_binary(value) == + <<0x04, max_int32::little-signed-32>> + end + + test "min int32 boundary stays 4-byte" do + min_int32 = -2_147_483_648 + + {_type_code, meta, value} = + Integer.encode(min_int32, %{}) + + assert IO.iodata_to_binary(meta) == <<0x26, 0x04>> + + assert IO.iodata_to_binary(value) == + <<0x04, min_int32::little-signed-32>> + end + + test "above int32 max uses 8-byte" do + val = 2_147_483_648 + + {_type_code, meta, value} = Integer.encode(val, %{}) + + assert IO.iodata_to_binary(meta) == <<0x26, 0x08>> + + assert IO.iodata_to_binary(value) == + <<0x08, val::little-signed-64>> + end + + test "below int32 min uses 8-byte" do + val = -2_147_483_649 + + {_type_code, meta, value} = Integer.encode(val, %{}) + + assert IO.iodata_to_binary(meta) == <<0x26, 0x08>> + + assert IO.iodata_to_binary(value) == + <<0x08, val::little-signed-64>> + end + end + + describe "param_descriptor/2" do + test "zero returns int" do + assert Integer.param_descriptor(0, %{}) == "int" + end + 
+ test "positive returns bigint" do + assert Integer.param_descriptor(42, %{}) == "bigint" + assert Integer.param_descriptor(1, %{}) == "bigint" + end + + test "negative returns decimal(N, 0)" do + # -2 has string "-2", length 2, precision = 2 - 1 = 1 + assert Integer.param_descriptor(-2, %{}) == "decimal(1, 0)" + end + + test "larger negative returns decimal with correct precision" do + # -1000 has string "-1000", length 5, precision = 5 - 1 = 4 + assert Integer.param_descriptor(-1000, %{}) == + "decimal(4, 0)" + end + + test "nil returns int" do + assert Integer.param_descriptor(nil, %{}) == "int" + end + end + + describe "infer/1" do + test "integer value infers" do + assert {:ok, %{}} = Integer.infer(42) + end + + test "zero infers" do + assert {:ok, %{}} = Integer.infer(0) + end + + test "negative integer infers" do + assert {:ok, %{}} = Integer.infer(-5) + end + + test "float skips" do + assert :skip = Integer.infer(3.14) + end + + test "string skips" do + assert :skip = Integer.infer("42") + end + + test "boolean skips" do + assert :skip = Integer.infer(true) + end + + test "nil skips" do + assert :skip = Integer.infer(nil) + end + end +end diff --git a/test/tds/type/money_test.exs b/test/tds/type/money_test.exs new file mode 100644 index 0000000..66a3a4e --- /dev/null +++ b/test/tds/type/money_test.exs @@ -0,0 +1,304 @@ +defmodule Tds.Type.MoneyTest do + use ExUnit.Case, async: true + + alias Tds.Type.Money + + describe "type_codes/0" do + test "returns money, smallmoney, and moneyn codes" do + codes = Money.type_codes() + # money (0x3C), smallmoney (0x7A), moneyn (0x6E) + assert 0x3C in codes + assert 0x7A in codes + assert 0x6E in codes + assert length(codes) == 3 + end + end + + describe "type_names/0" do + test "returns :money and :smallmoney" do + assert Money.type_names() == [:money, :smallmoney] + end + end + + describe "decode_metadata/1" do + test "fixed money (0x3C) returns {:fixed, 8}" do + input = <<0x3C, 0xAA, 0xBB>> + + assert {:ok, 
%{data_reader: {:fixed, 8}}, <<0xAA, 0xBB>>} = + Money.decode_metadata(input) + end + + test "fixed smallmoney (0x7A) returns {:fixed, 4}" do + input = <<0x7A, 0xCC, 0xDD>> + + assert {:ok, %{data_reader: {:fixed, 4}}, <<0xCC, 0xDD>>} = + Money.decode_metadata(input) + end + + test "variable moneyn (0x6E) reads 1-byte length" do + input = <<0x6E, 0x08, 0xEE, 0xFF>> + + assert {:ok, %{data_reader: :bytelen, length: 8}, <<0xEE, 0xFF>>} = + Money.decode_metadata(input) + end + + test "variable moneyn (0x6E) with length 4" do + input = <<0x6E, 0x04, 0xAA>> + + assert {:ok, %{data_reader: :bytelen, length: 4}, <<0xAA>>} = + Money.decode_metadata(input) + end + end + + describe "decode/2 - nil" do + test "nil returns nil" do + assert Money.decode(nil, %{}) == nil + end + end + + describe "decode/2 - smallmoney (4 bytes)" do + test "decodes 1.0000 (10000 units)" do + data = <<0x10, 0x27, 0x00, 0x00>> + result = Money.decode(data, %{}) + assert Decimal.equal?(result, Decimal.new("1.0000")) + end + + test "decodes -1.0000 (-10000 units)" do + data = <<0xF0, 0xD8, 0xFF, 0xFF>> + result = Money.decode(data, %{}) + assert Decimal.equal?(result, Decimal.new("-1.0000")) + end + + test "decodes zero" do + data = <<0x00, 0x00, 0x00, 0x00>> + result = Money.decode(data, %{}) + assert Decimal.equal?(result, Decimal.new("0.0000")) + end + + test "decodes max smallmoney: 214748.3647" do + # 214748.3647 * 10000 = 2147483647 = 0x7FFFFFFF + data = <<0xFF, 0xFF, 0xFF, 0x7F>> + result = Money.decode(data, %{}) + assert Decimal.equal?(result, Decimal.new("214748.3647")) + end + + test "decodes min smallmoney: -214748.3648" do + # -214748.3648 * 10000 = -2147483648 = 0x80000000 + data = <<0x00, 0x00, 0x00, 0x80>> + result = Money.decode(data, %{}) + assert Decimal.equal?(result, Decimal.new("-214748.3648")) + end + + test "decodes fractional: 0.0001" do + data = <<0x01, 0x00, 0x00, 0x00>> + result = Money.decode(data, %{}) + assert Decimal.equal?(result, Decimal.new("0.0001")) + end + + test 
"decodes negative fractional: -0.0001" do + data = <<0xFF, 0xFF, 0xFF, 0xFF>> + result = Money.decode(data, %{}) + assert Decimal.equal?(result, Decimal.new("-0.0001")) + end + end + + describe "decode/2 - money (8 bytes)" do + test "decodes 1.0000 (10000 units)" do + # Money wire format: high 4 bytes (LE unsigned), low 4 bytes (LE unsigned) + # 10000 = 0x00000000_00002710 + # high = 0x00000000 (LE: <<0,0,0,0>>), low = 0x00002710 (LE: <<0x10,0x27,0,0>>) + data = <<0x00, 0x00, 0x00, 0x00, 0x10, 0x27, 0x00, 0x00>> + result = Money.decode(data, %{}) + assert Decimal.equal?(result, Decimal.new("1.0000")) + end + + test "decodes -1.0000" do + # -10000 as signed-64 = 0xFFFFFFFFFFFFD8F0 + # Split: high = 0xFFFFFFFF, low = 0xFFFFD8F0 + # high LE: <<0xFF,0xFF,0xFF,0xFF>>, low LE: <<0xF0,0xD8,0xFF,0xFF>> + data = <<0xFF, 0xFF, 0xFF, 0xFF, 0xF0, 0xD8, 0xFF, 0xFF>> + result = Money.decode(data, %{}) + assert Decimal.equal?(result, Decimal.new("-1.0000")) + end + + test "decodes zero" do + data = <<0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00>> + result = Money.decode(data, %{}) + assert Decimal.equal?(result, Decimal.new("0.0000")) + end + + test "decodes max money: 922337203685477.5807" do + # 922337203685477.5807 * 10000 = 9223372036854775807 = 0x7FFFFFFFFFFFFFFF + # high = 0x7FFFFFFF (LE: <<0xFF,0xFF,0xFF,0x7F>>) + # low = 0xFFFFFFFF (LE: <<0xFF,0xFF,0xFF,0xFF>>) + data = <<0xFF, 0xFF, 0xFF, 0x7F, 0xFF, 0xFF, 0xFF, 0xFF>> + result = Money.decode(data, %{}) + expected = Decimal.new("922337203685477.5807") + assert Decimal.equal?(result, expected) + end + + test "decodes min money: -922337203685477.5808" do + # -922337203685477.5808 * 10000 = -9223372036854775808 = 0x8000000000000000 + # high = 0x80000000 (LE: <<0x00,0x00,0x00,0x80>>) + # low = 0x00000000 (LE: <<0x00,0x00,0x00,0x00>>) + data = <<0x00, 0x00, 0x00, 0x80, 0x00, 0x00, 0x00, 0x00>> + result = Money.decode(data, %{}) + expected = Decimal.new("-922337203685477.5808") + assert Decimal.equal?(result, expected) + end 
+ + test "decodes a value spanning both high and low words" do + # 1000000.0000 * 10000 = 10_000_000_000 + # 10_000_000_000 = 0x00000002_540BE400 + # high = 0x00000002 (LE: <<0x02,0x00,0x00,0x00>>) + # low = 0x540BE400 (LE: <<0x00,0xE4,0x0B,0x54>>) + data = <<0x02, 0x00, 0x00, 0x00, 0x00, 0xE4, 0x0B, 0x54>> + result = Money.decode(data, %{}) + assert Decimal.equal?(result, Decimal.new("1000000.0000")) + end + end + + describe "encode/2" do + test "nil produces moneyn null encoding" do + {type_code, meta, value} = Money.encode(nil, %{}) + + assert type_code == 0x6E + assert IO.iodata_to_binary(meta) == <<0x6E, 0x08>> + assert IO.iodata_to_binary(value) == <<0x00>> + end + + test "Decimal.new(\"1.0000\") encodes as money (8-byte)" do + dec = Decimal.new("1.0000") + {type_code, meta, value} = Money.encode(dec, %{}) + + assert type_code == 0x6E + assert IO.iodata_to_binary(meta) == <<0x6E, 0x08>> + + value_bin = IO.iodata_to_binary(value) + # length prefix + 8 bytes of money data + <<0x08, high::little-unsigned-32, low::little-unsigned-32>> = + value_bin + + <<combined::signed-64>> = <<high::unsigned-32, low::unsigned-32>> + assert combined == 10_000 + end + + test "Decimal.new(\"-1.0000\") encodes negative money" do + dec = Decimal.new("-1.0000") + {type_code, _meta, value} = Money.encode(dec, %{}) + + assert type_code == 0x6E + value_bin = IO.iodata_to_binary(value) + + <<0x08, high::little-unsigned-32, low::little-unsigned-32>> = + value_bin + + <<combined::signed-64>> = <<high::unsigned-32, low::unsigned-32>> + assert combined == -10_000 + end + + test "Decimal.new(\"0\") encodes as zero" do + dec = Decimal.new("0") + {_type_code, _meta, value} = Money.encode(dec, %{}) + + value_bin = IO.iodata_to_binary(value) + + <<0x08, high::little-unsigned-32, low::little-unsigned-32>> = + value_bin + + <<combined::signed-64>> = <<high::unsigned-32, low::unsigned-32>> + assert combined == 0 + end + + test "Decimal with fewer than 4 scale digits is scaled up" do + dec = Decimal.new("1.5") + {_type_code, _meta, value} = Money.encode(dec, %{}) + + value_bin = IO.iodata_to_binary(value) + + <<0x08, high::little-unsigned-32, low::little-unsigned-32>> = 
+ value_bin + + <<combined::signed-64>> = <<high::unsigned-32, low::unsigned-32>> + # 1.5 * 10000 = 15000 + assert combined == 15_000 + end + end + + describe "param_descriptor/2" do + test "smallmoney-range value returns smallmoney" do + dec = Decimal.new("100.0000") + assert Money.param_descriptor(dec, %{}) == "smallmoney" + end + + test "nil returns money" do + assert Money.param_descriptor(nil, %{}) == "money" + end + + test "max smallmoney returns smallmoney" do + dec = Decimal.new("214748.3647") + assert Money.param_descriptor(dec, %{}) == "smallmoney" + end + + test "min smallmoney returns smallmoney" do + dec = Decimal.new("-214748.3648") + assert Money.param_descriptor(dec, %{}) == "smallmoney" + end + + test "value exceeding smallmoney range returns money" do + dec = Decimal.new("214748.3648") + assert Money.param_descriptor(dec, %{}) == "money" + end + + test "large negative value returns money" do + dec = Decimal.new("-214748.3649") + assert Money.param_descriptor(dec, %{}) == "money" + end + + test "large positive value returns money" do + dec = Decimal.new("1000000.0000") + assert Money.param_descriptor(dec, %{}) == "money" + end + end + + describe "infer/1" do + test "always returns :skip (money is decode-only)" do + assert :skip = Money.infer(Decimal.new("1.0")) + assert :skip = Money.infer(42) + assert :skip = Money.infer(nil) + assert :skip = Money.infer("100.00") + end + end + + describe "encode/decode roundtrip" do + test "Decimal.new(\"12345.6789\") roundtrips" do + original = Decimal.new("12345.6789") + {_type, _meta, value} = Money.encode(original, %{}) + value_bin = IO.iodata_to_binary(value) + <<0x08, data::binary-8>> = value_bin + + decoded = Money.decode(data, %{}) + assert Decimal.equal?(decoded, original) + end + + test "Decimal.new(\"-99999.9999\") roundtrips" do + original = Decimal.new("-99999.9999") + {_type, _meta, value} = Money.encode(original, %{}) + value_bin = IO.iodata_to_binary(value) + <<0x08, data::binary-8>> = value_bin + + decoded = Money.decode(data, %{}) + assert 
Decimal.equal?(decoded, original) + end + + test "zero roundtrips" do + original = Decimal.new("0.0000") + {_type, _meta, value} = Money.encode(original, %{}) + value_bin = IO.iodata_to_binary(value) + <<0x08, data::binary-8>> = value_bin + + decoded = Money.decode(data, %{}) + assert Decimal.equal?(decoded, original) + end + end +end diff --git a/test/tds/type/registry_test.exs b/test/tds/type/registry_test.exs new file mode 100644 index 0000000..c0c31f5 --- /dev/null +++ b/test/tds/type/registry_test.exs @@ -0,0 +1,115 @@ +defmodule Tds.Type.RegistryTest do + use ExUnit.Case, async: true + + alias Tds.Type.Registry + + defmodule FakeInteger do + @behaviour Tds.Type + + def type_codes, do: [0x26, 0x38] + def type_names, do: [:integer] + def decode_metadata(r), do: {:ok, %{}, r} + def decode(nil, _), do: nil + def decode(_, _), do: 42 + def encode(v, _), do: {0x26, <<>>, <<v::little-signed-32>>} + def param_descriptor(_, _), do: "int" + def infer(v) when is_integer(v), do: {:ok, %{}} + def infer(_), do: :skip + end + + defmodule FakeString do + @behaviour Tds.Type + + def type_codes, do: [0xE7] + def type_names, do: [:string] + def decode_metadata(r), do: {:ok, %{}, r} + def decode(nil, _), do: nil + def decode(d, _), do: d + def encode(v, _), do: {0xE7, <<>>, v} + def param_descriptor(_, _), do: "nvarchar(max)" + def infer(v) when is_binary(v), do: {:ok, %{}} + def infer(_), do: :skip + end + + defmodule UserOverride do + @behaviour Tds.Type + + def type_codes, do: [0x26] + def type_names, do: [:integer] + def decode_metadata(r), do: {:ok, %{custom: true}, r} + def decode(nil, _), do: nil + def decode(_, _), do: :custom_int + def encode(v, _), do: {0x26, <<>>, <<v::little-signed-32>>} + def param_descriptor(_, _), do: "int" + def infer(v) when is_integer(v), do: {:ok, %{custom: true}} + def infer(_), do: :skip + end + + setup do + {:ok, registry: Registry.new([], [FakeInteger, FakeString])} + end + + describe "handler_for_code/2" do + test "finds handler by type code", %{registry: reg} do + assert {:ok, 
FakeInteger} = Registry.handler_for_code(reg, 0x26) + assert {:ok, FakeInteger} = Registry.handler_for_code(reg, 0x38) + assert {:ok, FakeString} = Registry.handler_for_code(reg, 0xE7) + end + + test "returns error for unknown code", %{registry: reg} do + assert :error = Registry.handler_for_code(reg, 0x00) + end + end + + describe "handler_for_name/2" do + test "finds handler by atom name", %{registry: reg} do + assert {:ok, FakeInteger} = Registry.handler_for_name(reg, :integer) + assert {:ok, FakeString} = Registry.handler_for_name(reg, :string) + end + + test "returns error for unknown name", %{registry: reg} do + assert :error = Registry.handler_for_name(reg, :unknown) + end + end + + describe "user type override" do + test "user handler overrides built-in for same type code" do + reg = Registry.new([UserOverride], [FakeInteger, FakeString]) + assert {:ok, UserOverride} = Registry.handler_for_code(reg, 0x26) + end + + test "user handler overrides built-in for same type name" do + reg = Registry.new([UserOverride], [FakeInteger, FakeString]) + assert {:ok, UserOverride} = Registry.handler_for_name(reg, :integer) + end + + test "non-overridden types still work" do + reg = Registry.new([UserOverride], [FakeInteger, FakeString]) + assert {:ok, FakeString} = Registry.handler_for_code(reg, 0xE7) + end + end + + describe "infer/2" do + test "infers integer handler from integer value", + %{registry: reg} do + assert {:ok, FakeInteger, %{}} = Registry.infer(reg, 42) + end + + test "infers string handler from binary value", + %{registry: reg} do + assert {:ok, FakeString, %{}} = Registry.infer(reg, "hello") + end + + test "returns error for unmatchable value", + %{registry: reg} do + assert :error = Registry.infer(reg, {:some, :tuple}) + end + + test "user types checked before built-ins" do + reg = Registry.new([UserOverride], [FakeInteger, FakeString]) + + assert {:ok, UserOverride, %{custom: true}} = + Registry.infer(reg, 42) + end + end +end diff --git 
a/test/tds/type/string_test.exs b/test/tds/type/string_test.exs new file mode 100644 index 0000000..7fa7fbc --- /dev/null +++ b/test/tds/type/string_test.exs @@ -0,0 +1,370 @@ +defmodule Tds.Type.StringTest do + use ExUnit.Case, async: true + + alias Tds.Type.String, as: StrType + alias Tds.Encoding.UCS2 + + # Null collation used in parameter encoding + @null_collation <<0x00, 0x00, 0x00, 0x00, 0x00>> + + # A real collation: lcid=0x00409 (US English), col_flags=0, version=0, + # sort_id=0x34 => WINDOWS-1252 + @sample_collation <<0x09, 0x04, 0x00, 0x00, 0x34>> + + describe "type_codes/0" do + test "returns all 8 string-related type codes" do + codes = StrType.type_codes() + + # bigchar + assert 0xAF in codes + # bigvarchar + assert 0xA7 in codes + # nvarchar + assert 0xE7 in codes + # nchar + assert 0xEF in codes + # text + assert 0x23 in codes + # varchar (legacy short) + assert 0x27 in codes + # char (legacy short) + assert 0x2F in codes + # ntext + assert 0x63 in codes + assert length(codes) == 8 + end + end + + describe "type_names/0" do + test "returns :string" do + assert StrType.type_names() == [:string] + end + end + + describe "decode_metadata/1 for nvarchar (0xE7)" do + test "reads 2-byte max_length and 5-byte collation, shortlen" do + tail = <<0xAA, 0xBB>> + # max_length=100 (LE), collation=null, tail + input = + <<0xE7, 100::little-unsigned-16>> <> + @null_collation <> tail + + assert {:ok, meta, ^tail} = StrType.decode_metadata(input) + assert meta.data_reader == :shortlen + assert meta.encoding == :ucs2 + assert meta.length == 100 + assert meta.collation != nil + end + + test "PLP marker 0xFFFF sets data_reader to :plp" do + input = + <<0xE7, 0xFF, 0xFF>> <> + @null_collation <> <<0xCC>> + + assert {:ok, meta, <<0xCC>>} = StrType.decode_metadata(input) + assert meta.data_reader == :plp + assert meta.encoding == :ucs2 + end + end + + describe "decode_metadata/1 for nchar (0xEF)" do + test "reads metadata like nvarchar" do + input = + <<0xEF, 
200::little-unsigned-16>> <> + @null_collation <> <<0xDD>> + + assert {:ok, meta, <<0xDD>>} = StrType.decode_metadata(input) + assert meta.data_reader == :shortlen + assert meta.encoding == :ucs2 + assert meta.length == 200 + end + end + + describe "decode_metadata/1 for bigvarchar (0xA7)" do + test "reads 2-byte max_length and 5-byte collation" do + input = + <<0xA7, 500::little-unsigned-16>> <> + @sample_collation <> <<0xEE>> + + assert {:ok, meta, <<0xEE>>} = StrType.decode_metadata(input) + assert meta.data_reader == :shortlen + assert meta.encoding == :single_byte + assert meta.length == 500 + assert meta.collation.codepage == "WINDOWS-1252" + end + + test "PLP marker sets :plp reader for bigvarchar" do + input = + <<0xA7, 0xFF, 0xFF>> <> + @sample_collation <> <<0xFF>> + + assert {:ok, meta, <<0xFF>>} = StrType.decode_metadata(input) + assert meta.data_reader == :plp + assert meta.encoding == :single_byte + end + end + + describe "decode_metadata/1 for bigchar (0xAF)" do + test "reads metadata like bigvarchar" do + input = + <<0xAF, 100::little-unsigned-16>> <> + @sample_collation <> <<0x11>> + + assert {:ok, meta, <<0x11>>} = StrType.decode_metadata(input) + assert meta.data_reader == :shortlen + assert meta.encoding == :single_byte + end + end + + describe "decode_metadata/1 for legacy varchar (0x27)" do + test "reads 1-byte length and 5-byte collation" do + input = <<0x27, 50>> <> @sample_collation <> <<0x22>> + + assert {:ok, meta, <<0x22>>} = StrType.decode_metadata(input) + assert meta.data_reader == :bytelen + assert meta.encoding == :single_byte + assert meta.length == 50 + end + end + + describe "decode_metadata/1 for legacy char (0x2F)" do + test "reads 1-byte length and 5-byte collation" do + input = <<0x2F, 30>> <> @sample_collation <> <<0x33>> + + assert {:ok, meta, <<0x33>>} = StrType.decode_metadata(input) + assert meta.data_reader == :bytelen + assert meta.encoding == :single_byte + assert meta.length == 30 + end + end + + describe 
"decode_metadata/1 for text (0x23)" do + test "reads 4-byte length, collation, and table name parts" do + # length=65535, collation, numparts=1, + # table part: 4 UCS-2 chars = 8 bytes = "test" + table_name = UCS2.from_string("test") + table_size = div(byte_size(table_name), 2) + + input = + <<0x23, 65535::little-unsigned-32>> <> + @sample_collation <> + <<1::signed-8, table_size::little-unsigned-16>> <> + table_name <> <<0x44>> + + assert {:ok, meta, <<0x44>>} = StrType.decode_metadata(input) + assert meta.data_reader == :longlen + assert meta.encoding == :single_byte + end + end + + describe "decode_metadata/1 for ntext (0x63)" do + test "reads 4-byte length, collation, and table name parts" do + table_name = UCS2.from_string("tbl") + table_size = div(byte_size(table_name), 2) + + input = + <<0x63, 65535::little-unsigned-32>> <> + @null_collation <> + <<1::signed-8, table_size::little-unsigned-16>> <> + table_name <> <<0x55>> + + assert {:ok, meta, <<0x55>>} = StrType.decode_metadata(input) + assert meta.data_reader == :longlen + assert meta.encoding == :ucs2 + end + end + + describe "decode/2" do + test "nil returns nil" do + assert StrType.decode(nil, %{encoding: :ucs2}) == nil + end + + test "UCS-2 data decodes to UTF-8 string" do + ucs2_data = UCS2.from_string("Hello") + meta = %{encoding: :ucs2} + + assert StrType.decode(ucs2_data, meta) == "Hello" + end + + test "empty UCS-2 data decodes to empty string" do + assert StrType.decode(<<>>, %{encoding: :ucs2}) == "" + end + + test "UCS-2 with non-ASCII characters" do + ucs2_data = UCS2.from_string("cafe\u0301") + meta = %{encoding: :ucs2} + + result = StrType.decode(ucs2_data, meta) + assert is_binary(result) + assert String.valid?(result) + end + + test "single-byte data uses codepage from collation" do + {:ok, collation} = + Tds.Protocol.Collation.decode(@sample_collation) + + meta = %{encoding: :single_byte, collation: collation} + # ASCII chars are valid in all WINDOWS codepages + data = "Hello" + + result 
= StrType.decode(data, meta) + assert result == "Hello" + end + end + + describe "encode/2" do + test "nil produces nvarchar PLP null" do + {type_code, meta_bin, value_bin} = StrType.encode(nil, %{}) + + assert type_code == 0xE7 + meta = IO.iodata_to_binary(meta_bin) + # type_code + nvarchar(max): 0xFFFF + null collation + assert meta == <<0xE7, 0xFF, 0xFF>> <> @null_collation + + value = IO.iodata_to_binary(value_bin) + # PLP null: 0xFFFFFFFFFFFFFFFF + assert value == <<0xFFFFFFFFFFFFFFFF::little-unsigned-64>> + end + + test "short string encodes as nvarchar with shortlen" do + {type_code, meta_bin, value_bin} = StrType.encode("hi", %{}) + + assert type_code == 0xE7 + + ucs2 = UCS2.from_string("hi") + ucs2_size = byte_size(ucs2) + + meta = IO.iodata_to_binary(meta_bin) + assert meta == <<0xE7, ucs2_size::little-unsigned-16>> <> @null_collation + + value = IO.iodata_to_binary(value_bin) + assert value == <<ucs2_size::little-unsigned-16>> <> ucs2 + end + + test "empty string encodes as nvarchar(max) with PLP empty" do + {type_code, meta_bin, value_bin} = StrType.encode("", %{}) + + assert type_code == 0xE7 + + meta = IO.iodata_to_binary(meta_bin) + # type_code + nvarchar(max) header for empty string + assert meta == <<0xE7, 0xFF, 0xFF>> <> @null_collation + + value = IO.iodata_to_binary(value_bin) + # PLP empty: size=0 (8 bytes) + terminator 0x00000000 + assert value == <<0::unsigned-64, 0::unsigned-32>> + end + + test "long string (>8000 UCS-2 bytes) encodes with PLP" do + # Create a string that will be > 8000 bytes in UCS-2 + long_str = String.duplicate("A", 4001) + {type_code, meta_bin, value_bin} = StrType.encode(long_str, %{}) + + assert type_code == 0xE7 + + meta = IO.iodata_to_binary(meta_bin) + # type_code + nvarchar(max) + assert meta == <<0xE7, 0xFF, 0xFF>> <> @null_collation + + value = IO.iodata_to_binary(value_bin) + ucs2 = UCS2.from_string(long_str) + ucs2_size = byte_size(ucs2) + + # PLP format: total_size (8) + chunk_size (4) + data + terminator (4) + <<total_size::little-unsigned-64, _rest::binary>> = value + assert 
total_size == ucs2_size + + # Should end with 0x00000000 terminator + assert :binary.part(value, byte_size(value), -4) == + <<0::little-unsigned-32>> + end + + test "string at exactly 8000 UCS-2 bytes uses shortlen" do + # 4000 chars * 2 bytes = 8000 UCS-2 bytes + str = String.duplicate("X", 4000) + {_type_code, meta_bin, _value_bin} = StrType.encode(str, %{}) + + meta = IO.iodata_to_binary(meta_bin) + ucs2_size = byte_size(UCS2.from_string(str)) + # type_code + shortlen, not PLP + assert meta == <<0xE7, ucs2_size::little-unsigned-16>> <> @null_collation + end + end + + describe "param_descriptor/2" do + test "nil returns nvarchar(1)" do + assert StrType.param_descriptor(nil, %{}) == "nvarchar(1)" + end + + test "empty string returns nvarchar(1)" do + assert StrType.param_descriptor("", %{}) == "nvarchar(1)" + end + + test "short string returns nvarchar(2000)" do + assert StrType.param_descriptor("hello", %{}) == "nvarchar(2000)" + end + + test "string over 2000 chars returns nvarchar(max)" do + long_str = String.duplicate("x", 2001) + assert StrType.param_descriptor(long_str, %{}) == "nvarchar(max)" + end + + test "string of exactly 2000 chars returns nvarchar(2000)" do + str = String.duplicate("y", 2000) + assert StrType.param_descriptor(str, %{}) == "nvarchar(2000)" + end + end + + describe "infer/1" do + test "UTF-8 string infers as string" do + assert {:ok, %{}} = StrType.infer("hello") + end + + test "empty string infers as string" do + assert {:ok, %{}} = StrType.infer("") + end + + test "integer skips" do + assert :skip = StrType.infer(42) + end + + test "atom skips" do + assert :skip = StrType.infer(:foo) + end + + test "nil skips" do + assert :skip = StrType.infer(nil) + end + + test "list skips" do + assert :skip = StrType.infer([1, 2]) + end + end + + describe "encode/decode roundtrip" do + test "short ASCII string roundtrips" do + original = "Hello, World!" 
+ {_type, _meta, value_bin} = StrType.encode(original, %{}) + value = IO.iodata_to_binary(value_bin) + + # shortlen: 2-byte length prefix + UCS-2 data + <<_len::little-unsigned-16, data::binary>> = value + + decoded = + StrType.decode(data, %{encoding: :ucs2}) + + assert decoded == original + end + + test "unicode string roundtrips" do + original = "Bonjour le monde" + {_type, _meta, value_bin} = StrType.encode(original, %{}) + value = IO.iodata_to_binary(value_bin) + + <<_len::little-unsigned-16, data::binary>> = value + + decoded = StrType.decode(data, %{encoding: :ucs2}) + assert decoded == original + end + end +end diff --git a/test/tds/type/udt_test.exs b/test/tds/type/udt_test.exs new file mode 100644 index 0000000..a887bb1 --- /dev/null +++ b/test/tds/type/udt_test.exs @@ -0,0 +1,201 @@ +defmodule Tds.Type.UdtTest do + use ExUnit.Case, async: true + + alias Tds.Type.Udt + + describe "type_codes/0" do + test "returns UDT type code 0xF0" do + codes = Udt.type_codes() + + assert 0xF0 in codes + assert length(codes) == 1 + end + end + + describe "type_names/0" do + test "returns :udt" do + assert Udt.type_names() == [:udt] + end + end + + # -- decode_metadata ----------------------------------------------- + + describe "decode_metadata/1 with shortlen" do + test "reads 2-byte LE max_length, shortlen reader" do + tail = <<0xAA, 0xBB>> + input = <<0xF0, 200::little-unsigned-16>> <> tail + + assert {:ok, meta, ^tail} = Udt.decode_metadata(input) + assert meta.data_reader == :shortlen + assert meta.length == 200 + end + end + + describe "decode_metadata/1 with PLP" do + test "PLP marker 0xFFFF sets data_reader to :plp" do + input = <<0xF0, 0xFF, 0xFF, 0xCC>> + + assert {:ok, meta, <<0xCC>>} = Udt.decode_metadata(input) + assert meta.data_reader == :plp + assert meta.length == 0xFFFF + end + end + + # -- decode -------------------------------------------------------- + + describe "decode/2" do + test "nil returns nil" do + assert Udt.decode(nil, %{}) == nil + end + + test "raw binary passthrough" do + data = <<0x00, 0x01, 0xFF, 0xFE, 0x80, 
0x7F>> + result = Udt.decode(data, %{}) + assert result == data + end + + test "returns independent copy of the data" do + big = :crypto.strong_rand_bytes(100) + <<chunk::binary-10, _::binary>> = big + result = Udt.decode(chunk, %{}) + assert result == chunk + assert byte_size(result) == 10 + end + + test "empty binary returns empty binary" do + assert Udt.decode(<<>>, %{}) == <<>> + end + + test "preserves arbitrary bytes including invalid UTF-8" do + data = <<0xC0, 0xC1, 0xF5, 0xFF>> + assert Udt.decode(data, %{}) == data + end + end + + # -- encode -------------------------------------------------------- + + describe "encode/2" do + test "nil produces bigvarbinary PLP null" do + {type_code, meta_bin, value_bin} = Udt.encode(nil, %{}) + + assert type_code == 0xA5 + meta = IO.iodata_to_binary(meta_bin) + assert meta == <<0xA5, 0xFF, 0xFF>> + + value = IO.iodata_to_binary(value_bin) + assert value == <<0xFFFFFFFFFFFFFFFF::little-unsigned-64>> + end + + test "short binary uses shortlen format" do + data = <<1, 2, 3, 4, 5>> + {type_code, meta_bin, value_bin} = Udt.encode(data, %{}) + + assert type_code == 0xA5 + meta = IO.iodata_to_binary(meta_bin) + assert meta == <<0xA5, 5::little-unsigned-16>> + + value = IO.iodata_to_binary(value_bin) + assert value == <<5::little-unsigned-16, 1, 2, 3, 4, 5>> + end + + test "empty binary encodes as PLP empty" do + {type_code, meta_bin, value_bin} = Udt.encode(<<>>, %{}) + + assert type_code == 0xA5 + meta = IO.iodata_to_binary(meta_bin) + assert meta == <<0xA5, 0xFF, 0xFF>> + + value = IO.iodata_to_binary(value_bin) + assert value == <<0::unsigned-64, 0::unsigned-32>> + end + + test "large binary (> 8000 bytes) uses PLP format" do + data = :crypto.strong_rand_bytes(8001) + {type_code, meta_bin, value_bin} = Udt.encode(data, %{}) + + assert type_code == 0xA5 + meta = IO.iodata_to_binary(meta_bin) + assert meta == <<0xA5, 0xFF, 0xFF>> + + value = IO.iodata_to_binary(value_bin) + <<total_size::little-unsigned-64, _::binary>> = value + assert total_size == 8001 + + # Ends with PLP terminator + assert 
:binary.part(value, byte_size(value), -4) == + <<0::little-unsigned-32>> + end + end + + # -- param_descriptor ----------------------------------------------- + + describe "param_descriptor/2" do + test "nil returns varbinary(max)" do + assert Udt.param_descriptor(nil, %{}) == "varbinary(max)" + end + + test "non-empty binary returns varbinary(max)" do + data = <<1, 2, 3>> + assert Udt.param_descriptor(data, %{}) == "varbinary(max)" + end + + test "empty binary returns varbinary(max)" do + assert Udt.param_descriptor(<<>>, %{}) == "varbinary(max)" + end + end + + # -- infer ---------------------------------------------------------- + + describe "infer/1" do + test "always returns :skip for binaries" do + assert :skip = Udt.infer(<<0xDE, 0xAD>>) + end + + test "always returns :skip for nil" do + assert :skip = Udt.infer(nil) + end + + test "always returns :skip for strings" do + assert :skip = Udt.infer("hello") + end + + test "always returns :skip for integers" do + assert :skip = Udt.infer(42) + end + end + + # -- roundtrip ------------------------------------------------------ + + describe "encode/decode roundtrip" do + test "short binary roundtrips" do + original = <<0xDE, 0xAD, 0xBE, 0xEF>> + {_type, _meta, value_bin} = Udt.encode(original, %{}) + value = IO.iodata_to_binary(value_bin) + + # shortlen: 2-byte length prefix + raw data + <<_len::little-unsigned-16, data::binary>> = value + + assert Udt.decode(data, %{}) == original + end + + test "large binary roundtrips through PLP" do + original = :crypto.strong_rand_bytes(10_000) + {_type, _meta, value_bin} = Udt.encode(original, %{}) + value = IO.iodata_to_binary(value_bin) + + # PLP: skip 8-byte total size, then reassemble chunks + <<_total::little-unsigned-64, chunked::binary>> = value + data = reassemble_plp(chunked) + + assert Udt.decode(data, %{}) == original + end + end + + # Helper to reassemble PLP chunks for roundtrip testing + defp reassemble_plp(<<0::little-unsigned-32, _rest::binary>>), + do: <<>> + + defp reassemble_plp(<<chunk_size::little-unsigned-32, chunk::binary-size(chunk_size), rest::binary>>) do + chunk 
<> reassemble_plp(rest) + end +end diff --git a/test/tds/type/uuid_test.exs b/test/tds/type/uuid_test.exs new file mode 100644 index 0000000..b5abbea --- /dev/null +++ b/test/tds/type/uuid_test.exs @@ -0,0 +1,176 @@ +defmodule Tds.Type.UUIDTest do + use ExUnit.Case, async: true + + alias Tds.Type.UUID + + # Tds.Types.UUID works in mixed-endian format. Bytes are + # stored and returned without reordering to preserve existing + # roundtrip behavior with bingenerate/load/dump. + + @test_binary <<1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16>> + + @uuid_string "01020304-0506-0708-090a-0b0c0d0e0f10" + + # parse_uuid_string produces hex bytes in order (no reorder) + @parsed_string_binary <<0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, 0x0C, + 0x0D, 0x0E, 0x0F, 0x10>> + + describe "type_codes/0" do + test "returns uniqueidentifier code 0x24" do + assert UUID.type_codes() == [0x24] + end + end + + describe "type_names/0" do + test "returns :uuid" do + assert UUID.type_names() == [:uuid] + end + end + + # -- decode_metadata ------------------------------------------------- + + describe "decode_metadata/1" do + test "reads 1-byte length and returns bytelen reader" do + tail = <<0xAA, 0xBB>> + input = <<0x24, 0x10>> <> tail + + assert {:ok, meta, ^tail} = UUID.decode_metadata(input) + assert meta.data_reader == :bytelen + end + + test "consumes type code and length byte from stream" do + input = <<0x24, 0x10, 0xCC, 0xDD>> + + assert {:ok, _meta, <<0xCC, 0xDD>>} = + UUID.decode_metadata(input) + end + end + + # -- decode ---------------------------------------------------------- + + describe "decode/2" do + test "nil returns nil" do + assert UUID.decode(nil, %{}) == nil + end + + test "returns the raw 16-byte binary as-is" do + result = UUID.decode(@test_binary, %{}) + assert result == @test_binary + end + + test "returns independent copy of the data" do + big = @test_binary <> :crypto.strong_rand_bytes(100) + <<chunk::binary-16, _::binary>> = big + result = UUID.decode(chunk, 
%{}) + assert byte_size(result) == 16 + end + end + + # -- encode ---------------------------------------------------------- + + describe "encode/2" do + test "nil produces null encoding" do + {type_code, meta_bin, value_bin} = UUID.encode(nil, %{}) + + assert type_code == 0x24 + meta = IO.iodata_to_binary(meta_bin) + assert meta == <<0x24, 0x10>> + value = IO.iodata_to_binary(value_bin) + assert value == <<0x00>> + end + + test "binary UUID is sent as-is (no reorder)" do + {type_code, meta_bin, value_bin} = + UUID.encode(@test_binary, %{}) + + assert type_code == 0x24 + meta = IO.iodata_to_binary(meta_bin) + assert meta == <<0x24, 0x10>> + value = IO.iodata_to_binary(value_bin) + assert value == <<0x10>> <> @test_binary + end + + test "string UUID is parsed to binary" do + {type_code, meta_bin, value_bin} = + UUID.encode(@uuid_string, %{}) + + assert type_code == 0x24 + meta = IO.iodata_to_binary(meta_bin) + assert meta == <<0x24, 0x10>> + value = IO.iodata_to_binary(value_bin) + assert value == <<0x10>> <> @parsed_string_binary + end + + test "string UUID is case-insensitive" do + upper = "01020304-0506-0708-090A-0B0C0D0E0F10" + {_type, _meta, value_bin} = UUID.encode(upper, %{}) + + value = IO.iodata_to_binary(value_bin) + assert value == <<0x10>> <> @parsed_string_binary + end + end + + # -- param_descriptor ------------------------------------------------ + + describe "param_descriptor/2" do + test "returns uniqueidentifier for any value" do + assert UUID.param_descriptor(@test_binary, %{}) == + "uniqueidentifier" + + assert UUID.param_descriptor(nil, %{}) == + "uniqueidentifier" + + assert UUID.param_descriptor(@uuid_string, %{}) == + "uniqueidentifier" + end + end + + # -- infer ----------------------------------------------------------- + + describe "infer/1" do + test "16-byte binary infers as uuid" do + assert {:ok, %{}} = UUID.infer(@test_binary) + end + + test "string UUID skips (must use explicit type: :uuid)" do + assert :skip = 
UUID.infer(@uuid_string) + end + + test "nil skips" do + assert :skip = UUID.infer(nil) + end + + test "integer skips" do + assert :skip = UUID.infer(42) + end + + test "non-16-byte binary skips" do + assert :skip = UUID.infer(<<1, 2, 3>>) + end + + test "atom skips" do + assert :skip = UUID.infer(:foo) + end + end + + # -- roundtrip ------------------------------------------------------- + + describe "encode/decode roundtrip" do + test "encode then decode preserves input bytes" do + {_type, _meta, value_bin} = + UUID.encode(@test_binary, %{}) + + value = IO.iodata_to_binary(value_bin) + + # Strip the 1-byte length prefix to get wire bytes + <<0x10, wire::binary-16>> = value + # No reorder: wire bytes == input bytes + assert UUID.decode(wire, %{}) == @test_binary + end + + test "random 16-byte binary roundtrips through decode" do + random = :crypto.strong_rand_bytes(16) + assert UUID.decode(random, %{}) == random + end + end +end diff --git a/test/tds/type/variant_test.exs b/test/tds/type/variant_test.exs new file mode 100644 index 0000000..64ef1c3 --- /dev/null +++ b/test/tds/type/variant_test.exs @@ -0,0 +1,115 @@ +defmodule Tds.Type.VariantTest do + use ExUnit.Case, async: true + + alias Tds.Type.Variant + + describe "type_codes/0" do + test "returns variant type code 0x62" do + codes = Variant.type_codes() + + assert 0x62 in codes + assert length(codes) == 1 + end + end + + describe "type_names/0" do + test "returns :variant" do + assert Variant.type_names() == [:variant] + end + end + + # -- decode_metadata ------------------------------------------------- + + describe "decode_metadata/1" do + test "reads 4-byte LE max_length and returns variant reader" do + tail = <<0xAA, 0xBB>> + input = <<0x62, 8009::little-signed-32>> <> tail + + assert {:ok, meta, ^tail} = Variant.decode_metadata(input) + assert meta.data_reader == :variant + assert meta.length == 8009 + end + + test "handles zero max_length" do + tail = <<0xCC>> + input = <<0x62, 0::little-signed-32>> <> 
tail + + assert {:ok, meta, ^tail} = Variant.decode_metadata(input) + assert meta.data_reader == :variant + assert meta.length == 0 + end + end + + # -- decode ---------------------------------------------------------- + + describe "decode/2" do + test "nil returns nil" do + assert Variant.decode(nil, %{}) == nil + end + + test "binary data returns raw binary passthrough" do + data = <<0x01, 0x02, 0x03, 0x04, 0x05>> + assert Variant.decode(data, %{}) == data + end + + test "empty binary returns empty binary" do + assert Variant.decode(<<>>, %{}) == <<>> + end + + test "returns independent copy of data" do + big = :crypto.strong_rand_bytes(100) + <<chunk::binary-20, _::binary>> = big + result = Variant.decode(chunk, %{}) + assert result == chunk + assert byte_size(result) == 20 + end + end + + # -- encode ---------------------------------------------------------- + + describe "encode/2" do + test "raises for any value (stub)" do + assert_raise RuntimeError, ~r/sql_variant/i, fn -> + Variant.encode("anything", %{}) + end + end + + test "raises for nil (stub)" do + assert_raise RuntimeError, ~r/sql_variant/i, fn -> + Variant.encode(nil, %{}) + end + end + end + + # -- param_descriptor ------------------------------------------------ + + describe "param_descriptor/2" do + test "returns sql_variant for any value" do + assert Variant.param_descriptor("any", %{}) == "sql_variant" + end + + test "returns sql_variant for nil" do + assert Variant.param_descriptor(nil, %{}) == "sql_variant" + end + end + + # -- infer ----------------------------------------------------------- + + describe "infer/1" do + test "always returns :skip for strings" do + assert :skip = Variant.infer("hello") + end + + test "always returns :skip for nil" do + assert :skip = Variant.infer(nil) + end + + test "always returns :skip for integers" do + assert :skip = Variant.infer(42) + end + + test "always returns :skip for binaries" do + assert :skip = Variant.infer(<<0xFF>>) + end + end +end diff --git a/test/tds/type/xml_test.exs 
b/test/tds/type/xml_test.exs new file mode 100644 index 0000000..60ca7c7 --- /dev/null +++ b/test/tds/type/xml_test.exs @@ -0,0 +1,200 @@ +defmodule Tds.Type.XmlTest do + use ExUnit.Case, async: true + + alias Tds.Type.Xml + alias Tds.Encoding.UCS2 + + @null_collation <<0x00, 0x00, 0x00, 0x00, 0x00>> + + describe "type_codes/0" do + test "returns xml type code 0xF1" do + codes = Xml.type_codes() + + assert 0xF1 in codes + assert length(codes) == 1 + end + end + + describe "type_names/0" do + test "returns :xml" do + assert Xml.type_names() == [:xml] + end + end + + describe "decode_metadata/1 without schema" do + test "reads 1 schema-presence byte and returns plp reader" do + tail = <<0xAA, 0xBB>> + input = <<0xF1, 0x00>> <> tail + + assert {:ok, meta, ^tail} = Xml.decode_metadata(input) + assert meta.data_reader == :plp + end + end + + describe "decode_metadata/1 with schema" do + test "reads schema info strings and returns plp reader" do + db_name = UCS2.from_string("mydb") + db_len = div(byte_size(db_name), 2) + + owner_name = UCS2.from_string("dbo") + owner_len = div(byte_size(owner_name), 2) + + collection_name = UCS2.from_string("MySchema") + collection_len = div(byte_size(collection_name), 2) + + tail = <<0xCC>> + + input = + <<0xF1, 0x01, db_len::unsigned-8>> <> + db_name <> + <<owner_len::unsigned-8>> <> + owner_name <> + <<collection_len::little-unsigned-16>> <> + collection_name <> + tail + + assert {:ok, meta, ^tail} = Xml.decode_metadata(input) + assert meta.data_reader == :plp + end + + test "handles empty schema strings" do + tail = <<0xDD>> + + input = + <<0xF1, 0x01, 0::unsigned-8, 0::unsigned-8, 0::little-unsigned-16>> <> tail + + assert {:ok, meta, ^tail} = Xml.decode_metadata(input) + assert meta.data_reader == :plp + end + end + + describe "decode/2" do + test "nil returns nil" do + assert Xml.decode(nil, %{}) == nil + end + + test "UCS-2 data decodes to UTF-8 string" do + ucs2_data = UCS2.from_string("hello") + assert Xml.decode(ucs2_data, %{}) == "hello" + end + + test "empty binary returns empty 
string" do + assert Xml.decode(<<>>, %{}) == "" + end + + test "UCS-2 with non-ASCII characters" do + ucs2_data = UCS2.from_string("cafe\u0301") + result = Xml.decode(ucs2_data, %{}) + + assert is_binary(result) + assert String.valid?(result) + assert String.contains?(result, "cafe") + end + end + + describe "encode/2" do + test "nil produces nvarchar PLP null" do + {type_code, meta_bin, value_bin} = Xml.encode(nil, %{}) + + assert type_code == 0xE7 + meta = IO.iodata_to_binary(meta_bin) + assert meta == <<0xE7, 0xFF, 0xFF>> <> @null_collation + + value = IO.iodata_to_binary(value_bin) + assert value == <<0xFFFFFFFFFFFFFFFF::little-unsigned-64>> + end + + test "short XML encodes as nvarchar with shortlen" do + xml = "<a>b</a>" + {type_code, meta_bin, value_bin} = Xml.encode(xml, %{}) + + assert type_code == 0xE7 + + ucs2 = UCS2.from_string(xml) + ucs2_size = byte_size(ucs2) + + meta = IO.iodata_to_binary(meta_bin) + assert meta == <<0xE7, ucs2_size::little-unsigned-16>> <> @null_collation + + value = IO.iodata_to_binary(value_bin) + assert value == <<ucs2_size::little-unsigned-16>> <> ucs2 + end + + test "empty string encodes as nvarchar(max) with PLP empty" do + {type_code, meta_bin, value_bin} = Xml.encode("", %{}) + + assert type_code == 0xE7 + + meta = IO.iodata_to_binary(meta_bin) + assert meta == <<0xE7, 0xFF, 0xFF>> <> @null_collation + + value = IO.iodata_to_binary(value_bin) + assert value == <<0::unsigned-64, 0::unsigned-32>> + end + + test "large XML (>8000 UCS-2 bytes) encodes with PLP" do + xml = + "<x>" <> String.duplicate("A", 4001) <> "</x>" + + {type_code, meta_bin, value_bin} = Xml.encode(xml, %{}) + + assert type_code == 0xE7 + + meta = IO.iodata_to_binary(meta_bin) + assert meta == <<0xE7, 0xFF, 0xFF>> <> @null_collation + + value = IO.iodata_to_binary(value_bin) + ucs2 = UCS2.from_string(xml) + ucs2_size = byte_size(ucs2) + + <<total_size::little-unsigned-64, _rest::binary>> = value + assert total_size == ucs2_size + + # Ends with PLP terminator + assert :binary.part(value, byte_size(value), -4) == + <<0::little-unsigned-32>> + end + end + 
+ describe "param_descriptor/2" do + test "nil returns xml" do + assert Xml.param_descriptor(nil, %{}) == "xml" + end + + test "any XML string returns xml" do + assert Xml.param_descriptor("<doc/>", %{}) == "xml" + end + + test "empty string returns xml" do + assert Xml.param_descriptor("", %{}) == "xml" + end + end + + describe "infer/1" do + test "always returns :skip for strings" do + assert :skip = Xml.infer("<doc/>") + end + + test "always returns :skip for nil" do + assert :skip = Xml.infer(nil) + end + + test "always returns :skip for integers" do + assert :skip = Xml.infer(42) + end + end + + describe "encode/decode roundtrip" do + test "short XML roundtrips" do + original = "<tag>data</tag>" + {_type, _meta, value_bin} = Xml.encode(original, %{}) + value = IO.iodata_to_binary(value_bin) + + # shortlen: 2-byte length prefix + UCS-2 data + <<_len::little-unsigned-16, data::binary>> = value + + decoded = Xml.decode(data, %{}) + assert decoded == original + end + end +end diff --git a/test/tds/type_integration_test.exs b/test/tds/type_integration_test.exs new file mode 100644 index 0000000..01c29c0 --- /dev/null +++ b/test/tds/type_integration_test.exs @@ -0,0 +1,367 @@ +defmodule Tds.TypeIntegrationTest do + @moduledoc """ + End-to-end tests for the new type system pipeline: + Registry -> handler.decode_metadata -> DataReader.read -> handler.decode + + These tests craft binary payloads that match real TDS COLMETADATA + + ROW token sequences and verify that Tds.Tokens.decode_tokens/2 + produces the correct Elixir values. + """ + + use ExUnit.Case, async: true + + import Tds.Protocol.Constants + + alias Tds.Type.{DataReader, Registry} + + # -- Helper: build a minimal COLMETADATA + ROW + DONE stream -------- + + # Builds a binary token stream with one column and one row. + # type_meta_bin is the raw type metadata (starts with type code byte). + # value_bin is the raw row value data (with length prefix per reader). 
+ defp single_column_stream(type_meta_bin, value_bin, name \\ "c") do + col_name_ucs2 = :unicode.characters_to_binary(name, :utf8, {:utf16, :little}) + name_len = div(byte_size(col_name_ucs2), 2) + + # column_count = 1 + # usertype (4 bytes) + flags (2 bytes) + colmetadata_body = + <<0x01, 0x00>> <> + <<0x00, 0x00, 0x00, 0x00, 0x00, 0x20>> <> + type_meta_bin <> + <<name_len::unsigned-8>> <> + col_name_ucs2 + + # ROW token (0xD1) + value_bin + row_body = value_bin + + # DONE token (0xFD) + 12 bytes + done_body = + <<0x10, 0x00, 0xC1, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00>> + + # COLMETADATA token (0x81) + <<0x81>> <> + colmetadata_body <> + <<0xD1>> <> + row_body <> + <<0xFD>> <> + done_body + end + + # -- Registry + handler.decode_metadata pipeline tests --------------- + + describe "decode pipeline: integer column" do + test "fixed int (0x38) produces correct value" do + # Type metadata: int (0x38) — fixed, no extra bytes + type_meta = <<0x38>> + # Row value: 4 bytes LE = 42 + value = <<42, 0, 0, 0>> + + stream = single_column_stream(type_meta, value) + tokens = Tds.Tokens.decode_tokens(stream) + + assert [ + colmetadata: [%{handler: Tds.Type.Integer}], + row: [42], + done: _ + ] = tokens + end + + test "intn (0x26) with bytelen reader and NULL" do + # Type metadata: intn (0x26), length=4 + type_meta = <<0x26, 0x04>> + # Row value: bytelen NULL (0x00) + value = <<0x00>> + + stream = single_column_stream(type_meta, value) + tokens = Tds.Tokens.decode_tokens(stream) + + assert [ + colmetadata: [%{handler: Tds.Type.Integer}], + row: [nil], + done: _ + ] = tokens + end + + test "intn (0x26) with 4-byte value" do + # Type metadata: intn (0x26), length=4 + type_meta = <<0x26, 0x04>> + # Row value: bytelen size=4, value=100 + value = <<0x04, 100, 0, 0, 0>> + + stream = single_column_stream(type_meta, value) + tokens = Tds.Tokens.decode_tokens(stream) + + assert [ + colmetadata: _, + row: [100], + done: _ + ] = tokens + end + + test "bigint (0x7F) fixed 8-byte value" do + type_meta = <<0x7F>> + value = <<1, 0, 0, 0, 0, 0, 0, 0>> + + stream = 
single_column_stream(type_meta, value) + tokens = Tds.Tokens.decode_tokens(stream) + + assert [ + colmetadata: _, + row: [1], + done: _ + ] = tokens + end + end + + describe "decode pipeline: string column" do + test "bigvarchar (0xA7) shortlen with ASCII data" do + collation = <<0x09, 0x04, 0xD0, 0x00, 0x34>> + type_meta = <<0xA7, 100::little-unsigned-16>> <> collation + # shortlen: 2-byte LE length + data + value = <<0x03, 0x00, "foo">> + + stream = single_column_stream(type_meta, value) + tokens = Tds.Tokens.decode_tokens(stream) + + assert [ + colmetadata: [%{handler: Tds.Type.String}], + row: ["foo"], + done: _ + ] = tokens + end + + test "nvarchar (0xE7) shortlen with UCS-2 data" do + collation = <<0x09, 0x04, 0xD0, 0x00, 0x34>> + # max_length = 200 (0xC8, 0x00 LE) + type_meta = <<0xE7, 200::little-unsigned-16>> <> collation + ucs2 = :unicode.characters_to_binary("hello", :utf8, {:utf16, :little}) + ucs2_len = byte_size(ucs2) + value = <<ucs2_len::little-unsigned-16>> <> ucs2 + + stream = single_column_stream(type_meta, value) + tokens = Tds.Tokens.decode_tokens(stream) + + assert [ + colmetadata: [%{handler: Tds.Type.String, encoding: :ucs2}], + row: ["hello"], + done: _ + ] = tokens + end + + test "nvarchar (0xE7) PLP with UCS-2 data" do + collation = <<0x09, 0x04, 0xD0, 0x00, 0x34>> + # max_length = 0xFFFF means PLP + type_meta = <<0xE7, 0xFF, 0xFF>> <> collation + ucs2 = :unicode.characters_to_binary("test", :utf8, {:utf16, :little}) + ucs2_len = byte_size(ucs2) + + # PLP: 8-byte total size + chunk (4-byte chunk_size + data) + terminator + value = + <<ucs2_len::little-unsigned-64>> <> + <<ucs2_len::little-unsigned-32>> <> + ucs2 <> + <<0, 0, 0, 0>> + + stream = single_column_stream(type_meta, value) + tokens = Tds.Tokens.decode_tokens(stream) + + assert [ + colmetadata: [%{handler: Tds.Type.String, data_reader: :plp}], + row: ["test"], + done: _ + ] = tokens + end + + test "nvarchar PLP NULL" do + collation = <<0x09, 0x04, 0xD0, 0x00, 0x34>> + type_meta = <<0xE7, 0xFF, 0xFF>> <> collation + # PLP NULL marker: 8 bytes of 0xFF + value = <<0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF>> + + stream = 
+        single_column_stream(type_meta, value)
+      tokens = Tds.Tokens.decode_tokens(stream)
+
+      assert [
+               colmetadata: _,
+               row: [nil],
+               done: _
+             ] = tokens
+    end
+
+    test "shortlen NULL" do
+      collation = <<0x09, 0x04, 0xD0, 0x00, 0x34>>
+      # nvarchar (0xE7), max_length = 200
+      type_meta = <<0xE7, 0xC8, 0x00>> <> collation
+      # shortlen NULL: 0xFFFF
+      value = <<0xFF, 0xFF>>
+
+      stream = single_column_stream(type_meta, value)
+      tokens = Tds.Tokens.decode_tokens(stream)
+
+      assert [
+               colmetadata: _,
+               row: [nil],
+               done: _
+             ] = tokens
+    end
+  end
+
+  describe "decode pipeline: multiple columns" do
+    test "int + nvarchar in same row" do
+      collation = <<0x09, 0x04, 0xD0, 0x00, 0x34>>
+      ucs2 = :unicode.characters_to_binary("hi", :utf8, {:utf16, :little})
+      ucs2_len = byte_size(ucs2)
+
+      col1_name = :unicode.characters_to_binary("id", :utf8, {:utf16, :little})
+      col2_name = :unicode.characters_to_binary("name", :utf8, {:utf16, :little})
+
+      # Column 1: int
+      # Column 2: nvarchar shortlen
+      colmetadata_body =
+        <<0x02, 0x00>> <>
+          <<0x00, 0x00, 0x00, 0x00, 0x00, 0x20>> <>
+          <<0x38>> <>
+          <<0x02>> <>
+          col1_name <>
+          <<0x00, 0x00, 0x00, 0x00, 0x00, 0x20>> <>
+          <<0xE7, 0xC8, 0x00>> <>
+          collation <>
+          <<0x04>> <> col2_name
+
+      # Row values: int 7 + nvarchar "hi"
+      row_body =
+        <<7, 0, 0, 0>> <>
+          <<ucs2_len::little-size(16)>> <> ucs2
+
+      done_body =
+        <<0x10, 0x00, 0xC1, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00>>
+
+      stream =
+        <<0x81>> <>
+          colmetadata_body <>
+          <<0xD1>> <>
+          row_body <>
+          <<0xFD>> <>
+          done_body
+
+      tokens = Tds.Tokens.decode_tokens(stream)
+
+      assert [
+               colmetadata: [
+                 %{name: "id", handler: Tds.Type.Integer},
+                 %{name: "name", handler: Tds.Type.String}
+               ],
+               row: [7, "hi"],
+               done: _
+             ] = tokens
+    end
+  end
+
+  describe "DataReader + handler.decode unit pipeline" do
+    test "integer via bytelen reader" do
+      handler = Tds.Type.Integer
+      # intn (0x26), length = 4
+      {:ok, meta, <<>>} = handler.decode_metadata(<<0x26, 0x04>>)
+
+      # Non-null: bytelen prefix 4, then 4 bytes LE = 99
+      {raw, <<>>} = DataReader.read(meta.data_reader, <<0x04, 99, 0, 0, 0>>)
+      assert handler.decode(raw, meta) == 99
+
+      # NULL: bytelen prefix 0x00
+      {nil_raw, <<>>} = DataReader.read(meta.data_reader, <<0x00>>)
+      assert handler.decode(nil_raw, meta) == nil
+    end
+
+    test "string via shortlen reader" do
+      handler = Tds.Type.String
+      collation = <<0x09, 0x04, 0xD0, 0x00, 0x34>>
+
+      # nvarchar (0xE7), max_length = 200
+      {:ok, meta, <<>>} =
+        handler.decode_metadata(<<0xE7, 0xC8, 0x00>> <> collation)
+
+      ucs2 = :unicode.characters_to_binary("abc", :utf8, {:utf16, :little})
+      ucs2_len = byte_size(ucs2)
+      {raw, <<>>} = DataReader.read(meta.data_reader, <<ucs2_len::little-size(16)>> <> ucs2)
+      assert handler.decode(raw, meta) == "abc"
+    end
+
+    test "boolean via fixed reader" do
+      handler = Tds.Type.Boolean
+      # bit (0x32)
+      {:ok, meta, <<>>} = handler.decode_metadata(<<0x32>>)
+
+      {raw, <<>>} = DataReader.read(meta.data_reader, <<0x01>>)
+      assert handler.decode(raw, meta) == true
+
+      {raw, <<>>} = DataReader.read(meta.data_reader, <<0x00>>)
+      assert handler.decode(raw, meta) == false
+    end
+
+    test "registry handler lookup" do
+      reg = Registry.new()
+
+      assert {:ok, Tds.Type.Integer} =
+               Registry.handler_for_code(reg, tds_type(:int))
+
+      assert {:ok, Tds.Type.String} =
+               Registry.handler_for_code(reg, tds_type(:nvarchar))
+
+      assert {:ok, Tds.Type.Boolean} =
+               Registry.handler_for_code(reg, tds_type(:bit))
+
+      assert {:ok, Tds.Type.DateTime} =
+               Registry.handler_for_code(reg, tds_type(:daten))
+
+      assert {:ok, Tds.Type.Decimal} =
+               Registry.handler_for_code(reg, tds_type(:decimaln))
+    end
+  end
+
+  describe "decode pipeline: boolean column" do
+    test "fixed bit (0x32) true" do
+      type_meta = <<0x32>>
+      value = <<0x01>>
+
+      stream = single_column_stream(type_meta, value)
+      tokens = Tds.Tokens.decode_tokens(stream)
+
+      assert [
+               colmetadata: [%{handler: Tds.Type.Boolean}],
+               row: [true],
+               done: _
+             ] = tokens
+    end
+
+    test "fixed bit (0x32) false" do
+      type_meta = <<0x32>>
+      value = <<0x00>>
+
+      stream = single_column_stream(type_meta, value)
+      tokens = Tds.Tokens.decode_tokens(stream)
+
+      assert [
+               colmetadata: _,
+               row: [false],
+               done: _
+             ] = tokens
+    end
+  end
+
+  describe "decode pipeline: decimal column" do
+    test "decimaln (0x6A) with precision 10, scale 2" do
+      # decimaln: type_code, length, precision, scale
+      type_meta = <<0x6A, 0x09, 0x0A, 0x02>>
+      # bytelen: size=5, sign=1 (positive), value=12345 LE
+      # 12345 = 0x3039 -> <<0x39, 0x30, 0x00, 0x00>>
+      value = <<0x05, 0x01, 0x39, 0x30, 0x00, 0x00>>
+
+      stream = single_column_stream(type_meta, value)
+      tokens = Tds.Tokens.decode_tokens(stream)
+
+      assert [
+               colmetadata: [%{handler: Tds.Type.Decimal}],
+               row: [dec],
+               done: _
+             ] = tokens
+
+      assert Decimal.equal?(dec, Decimal.new("123.45"))
+    end
+  end
+end
diff --git a/test/tds/type_test.exs b/test/tds/type_test.exs
new file mode 100644
index 0000000..33dbd3a
--- /dev/null
+++ b/test/tds/type_test.exs
@@ -0,0 +1,67 @@
+defmodule Tds.TypeTest do
+  use ExUnit.Case, async: true
+
+  defmodule MockHandler do
+    @behaviour Tds.Type
+
+    @impl true
+    def type_codes, do: [0xFF]
+
+    @impl true
+    def type_names, do: [:mock]
+
+    @impl true
+    def decode_metadata(<<rest::binary>>),
+      do: {:ok, %{data_reader: :bytelen, length: 1}, rest}
+
+    @impl true
+    def decode(nil, _meta), do: nil
+    def decode(<<val>>, _meta), do: val
+
+    @impl true
+    def encode(nil, _meta), do: {0xFF, <<0xFF, 0x00>>, <<0x00>>}
+    def encode(val, _meta), do: {0xFF, <<0xFF, 0x01>>, <<0x01, val>>}
+
+    @impl true
+    def param_descriptor(_value, _meta), do: "mock"
+
+    @impl true
+    def infer(val) when is_integer(val) and val in 0..255,
+      do: {:ok, %{}}
+
+    def infer(_), do: :skip
+  end
+
+  describe "behaviour contract" do
+    test "mock handler compiles and implements all callbacks" do
+      assert MockHandler.type_codes() == [0xFF]
+      assert MockHandler.type_names() == [:mock]
+    end
+
+    test "decode_metadata returns ok tuple with metadata and rest" do
+      assert {:ok, %{data_reader: :bytelen}, <<0xAA>>} =
+               MockHandler.decode_metadata(<<0xAA>>)
+    end
+
+    test "decode nil returns nil" do
+      assert MockHandler.decode(nil, %{}) == nil
+    end
+
+    test "decode binary returns value" do
+      assert MockHandler.decode(<<42>>, %{}) == 42
+    end
+
+    test "encode returns {type_code, meta_bin, value_bin}" do
+      {0xFF, _meta, _val} = MockHandler.encode(42, %{})
+    end
+
+    test "param_descriptor returns string" do
+      assert MockHandler.param_descriptor(42, %{}) == "mock"
+    end
+
+    test "infer returns ok or skip" do
+      assert {:ok, %{}} = MockHandler.infer(42)
+      assert :skip = MockHandler.infer("not a byte")
+    end
+  end
+end
diff --git a/test/types/types_test.exs b/test/types/types_test.exs
index 100ee1b..9b413ed 100644
--- a/test/types/types_test.exs
+++ b/test/types/types_test.exs
@@ -7,8 +7,6 @@ defmodule Tds.TypesTest do
 
   require Logger
 
-  @tds_data_type_decimaln 0x6A
-
   setup do
     {:ok, pid} = Tds.start_link(opts())
@@ -45,93 +43,68 @@ defmodule Tds.TypesTest do
     value
   end
 
-  describe "encode_data/3" do
+  describe "Decimal handler encode" do
     test "encodes decimal type", _context do
       value = Decimal.new("1000")
-      attr = [precision: 8, scale: 4]
-
-      # assert <<5, 1, 128, 150, 152, 0>> =
-      assert <<byte_len>> <> <<sign>> <> value_binary =
-               Tds.Types.encode_data(@tds_data_type_decimaln, value, attr)
+      {_type, _meta, val} = Tds.Type.Decimal.encode(value, %{})
+      <<byte_len, sign, value_binary::binary>> = IO.iodata_to_binary(val)
 
       assert byte_len == 5
       assert sign == 1
-      assert <<232, 3, 0, 0>> = value_binary
-      assert :binary.decode_unsigned(value_binary, :little) == 1000
+      coef = :binary.decode_unsigned(value_binary, :little)
+      assert coef == 1000
     end
 
-    test "encodes decimal type with scientific notation", _context do
+    test "encodes decimal with scientific notation", _context do
       value = Decimal.new("1E+3")
-      attr = [precision: 8, scale: 4]
-
-      assert <<byte_len>> <> <<sign>> <> value_binary =
-               Tds.Types.encode_data(@tds_data_type_decimaln, value, attr)
+      {_type, _meta, val} = Tds.Type.Decimal.encode(value, %{})
+      <<_byte_len, sign, value_binary::binary>> = IO.iodata_to_binary(val)
 
-      assert byte_len == 5
       assert sign == 1
-      assert <<232, 3, 0, 0>> = value_binary
       assert :binary.decode_unsigned(value_binary, :little) == 1000
     end
 
-    # Decimal.new("-1E+3")
-    test "encodes negative decimal with scientific notation", _context do
+    test "encodes negative decimal with scientific notation",
+         _context do
       value = Decimal.new("-1E+3")
-      attr = [precision: 8, scale: 4]
-
-      assert <<byte_len>> <> <<sign>> <> value_binary =
-               Tds.Types.encode_data(@tds_data_type_decimaln, value, attr)
+      {_type, _meta, val} = Tds.Type.Decimal.encode(value, %{})
+      <<_byte_len, sign, value_binary::binary>> = IO.iodata_to_binary(val)
 
-      assert byte_len == 5
       assert sign == 0
-      assert <<232, 3, 0, 0>> = value_binary
       assert :binary.decode_unsigned(value_binary, :little) == 1000
     end
 
-    test "encodes decimal type with precision", _context do
+    test "encodes decimal with fractional part", _context do
       value = Decimal.new("1000.0000")
-      attr = [precision: 8, scale: 4]
-
-      assert <<byte_len>> <> <<sign>> <> value_binary =
-               Tds.Types.encode_data(@tds_data_type_decimaln, value, attr)
+      {_type, _meta, val} = Tds.Type.Decimal.encode(value, %{})
+      <<_byte_len, sign, value_binary::binary>> = IO.iodata_to_binary(val)
 
-      assert byte_len == 5
       assert sign == 1
-      assert <<128, 150, 152, 0>> = value_binary
       assert :binary.decode_unsigned(value_binary, :little) == 10_000_000
     end
 
     test "encodes negative decimal", _context do
       value = Decimal.new("-1000.0000")
-      attr = [precision: 8, scale: 4]
-
-      assert <<byte_len>> <> <<sign>> <> value_binary =
-               Tds.Types.encode_data(@tds_data_type_decimaln, value, attr)
+      {_type, _meta, val} = Tds.Type.Decimal.encode(value, %{})
+      <<_byte_len, sign, value_binary::binary>> = IO.iodata_to_binary(val)
 
-      assert byte_len == 5
       assert sign == 0
-      assert <<128, 150, 152, 0>> = value_binary
       assert :binary.decode_unsigned(value_binary, :little) == 10_000_000
     end
 
-    test "encodes decimal type for 1000.1234", _context do
+    test "encodes decimal 1000.1234", _context do
       value = Decimal.new("1000.1234")
-      attr = [precision: 8, scale: 4]
-
-      assert <<byte_len>> <> <<sign>> <> value_binary =
-               Tds.Types.encode_data(@tds_data_type_decimaln, value, attr)
+      {_type, _meta, val} = Tds.Type.Decimal.encode(value, %{})
+      <<_byte_len, sign, value_binary::binary>> = IO.iodata_to_binary(val)
 
-      assert byte_len == 5
       assert sign == 1
-      assert <<82, 155, 152, 0>> = value_binary
       assert :binary.decode_unsigned(value_binary, :little) == 10_001_234
     end
 
     test "encodes very large decimal", _context do
       value = Decimal.new("9999999999.9999")
-      attr = [precision: 14, scale: 4]
-
-      assert <<byte_len>> <> <<sign>> <> value_binary =
-               Tds.Types.encode_data(@tds_data_type_decimaln, value, attr)
+      {_type, _meta, val} = Tds.Type.Decimal.encode(value, %{})
+      <<byte_len, sign, value_binary::binary>> = IO.iodata_to_binary(val)
 
       assert byte_len == 9
       assert sign == 1
@@ -140,12 +113,9 @@ defmodule Tds.TypesTest do
 
     test "encodes very small decimal", _context do
       value = Decimal.new("0.0001")
-      attr = [precision: 5, scale: 4]
+      {_type, _meta, val} = Tds.Type.Decimal.encode(value, %{})
+      <<_byte_len, sign, value_binary::binary>> = IO.iodata_to_binary(val)
 
-      assert <<byte_len>> <> <<sign>> <> value_binary =
-               Tds.Types.encode_data(@tds_data_type_decimaln, value, attr)
-
-      assert byte_len == 5
       assert sign == 1
       assert :binary.decode_unsigned(value_binary, :little) == 1
     end
@@ -223,9 +193,9 @@ defmodule Tds.TypesTest do
     @tag precision: 38, scale: 18, capture_log: true
     test "raises an error with value larger than SQL Server maximum", context do
-      # 39 digits
+      # 40 digits total (21 integer + 19 fractional)
       value = Decimal.new("999999999999999999999.9999999999999999999")
-      message = ~r/size \(39\) given to the type 'decimal' exceeds the maximum allowed \(38\)/
+      message = ~r/size \(40\) given to the type 'decimal' exceeds the maximum allowed \(38\)/
 
       assert_raise(MatchError, message, fn -> insert_decimal(value, context) end)
     end