Modern SQL: While you were looking elsewhere
Markus Winand https://winand.at • https://modern-sql.com • https://use-the-index-luke.com
Old
While SQL has turned 50 this year, it is by no means the same as 50 years ago. The latest update was released in mid-2023.
Relational
It depends on how you understand "relational": it is not about atomicity. It is fine to have JSON, XML, and even arrays in single fields. Yet, in the persistent model, simpler types have their merits.
Query Language
It is rather a query and transformation language. A pure query language (XPath, CSS selectors) just picks items out of something. SQL can do so much more — it is actually Turing complete.
Multi-paradigm
Early SQL focused on relational concepts. In 1999, SQL broke free from the relational cage: recursion, window functions, composite types, XML/JSON, etc.
Domain Specific
While SQL is Turing complete, it is not a general purpose language. SQL complements other languages to manage [*] data more easily.
[*]: large amounts of, shared, persistent, etc.
Extensible Industry Standard
Since 1987, SQL has been defined in international standards: ISO/IEC 9075-1 to -16. They define syntax and the semantics of statements using this syntax, and they explicitly allow SQL implementations to accept non-standard syntax.
Declarative
SQL is generally referred to as being declarative…
…but it also embraces ideas from the functional programming paradigm without enforcing its limitations. Functions may have side effects:
• NOT DETERMINISTIC
• MODIFIES SQL DATA
Strictly Typed
Although SQL is strictly typed[*], it has some flexible types:
CREATE TABLE t (
a INTEGER ARRAY[1000], -- maximum length
x XML,
j JSON
)
[*] SQLite being the famous exception (but see STRICT tables).
After all, SQL is very pragmatic
While the relational model comes with rigid rules for doing it right, SQL offers many options to choose from and doesn't judge your choice. SQL broke free from the relational cage in 1999.
Latest News
Let's start with the non-exciting stuff
0b101, 0o567, 0xBEEF
SELECT …
FROM …
WHERE id = 0b101010 -- binary literal

SELECT …
FROM …
WHERE id = 0o52 -- octal literal

SELECT …
FROM …
WHERE id = 0x2A -- hexadecimal literal

SELECT …
FROM …
WHERE id = 42 -- decimal
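Support varies by dialect: the hexadecimal form has the widest reach. SQLite, for example, has long accepted 0x literals, while the binary and octal prefixes are newer standard additions that many dialects still lack. A quick check via Python's sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# SQLite accepts hexadecimal integer literals; 0b/0o prefixes
# are not portable across dialects yet
hex_val = con.execute("SELECT 0x2A").fetchone()[0]
print(hex_val)  # 42
```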
0b101, 0o567, 0xBEEF: Availability in Various Dialects
1_000_000

SELECT … FROM … WHERE id = 1_000_000

1_000_000: Availability in Various Dialects
VARCHAR: Optional Maximum Length

CREATE TABLE … (
  x VARCHAR(/* what should I put here? */)
)

CREATE TABLE … (
  x VARCHAR -- Since SQL:2023, it's OK to skip it
)
VARCHAR without length: Availability in Various Dialects
(varchar(max) as a reasonably long length limit)

CREATE TABLE … (
x BIGINT UNIQUE -- NULLs are allowed
)
INSERT INTO … (x) VALUES (NULL) -- First NULL, no doubt
INSERT INTO … (x) VALUES (NULL) -- Should it work?
-- According to my understanding
-- of the standard before 2023
-- YES!
CREATE TABLE … (
  x BIGINT UNIQUE NULLS DISTINCT -- or: NULLS NOT DISTINCT
)

INSERT INTO … (x) VALUES (NULL) -- first one works anyway
INSERT INTO … (x) VALUES (NULL) -- Works if NULLS DISTINCT
                                -- Rejected if NULLS NOT DISTINCT
                                -- Default: “implementation defined”
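For a concrete data point: SQLite has no NULLS [NOT] DISTINCT clause; its UNIQUE constraints always behave as NULLS DISTINCT, so every NULL counts as unique. A sketch via Python's sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x BIGINT UNIQUE)")  # NULLs are allowed

# SQLite behaves like NULLS DISTINCT: every NULL counts as unique
con.execute("INSERT INTO t (x) VALUES (NULL)")
con.execute("INSERT INTO t (x) VALUES (NULL)")  # accepted as well
null_count = con.execute(
    "SELECT COUNT(*) FROM t WHERE x IS NULL").fetchone()[0]
print(null_count)  # 2

# a duplicate non-NULL value is still rejected
con.execute("INSERT INTO t (x) VALUES (1)")
dup_rejected = False
try:
    con.execute("INSERT INTO t (x) VALUES (1)")
except sqlite3.IntegrityError:
    dup_rejected = True
print(dup_rejected)  # True
```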
NULLS [NOT] DISTINCT: Availability in Various Dialects

NULLS FIRST|LAST: How to sort NULL values?

CREATE TABLE … (
  x BIGINT -- A NULL-able column
)

SELECT x FROM … ORDER BY x
-- Where should the NULLs appear?
-- It is “implementation defined”:
-- either before non-NULL values
-- or after them, but not in between
SELECT x
FROM …
ORDER BY x [ASC|DESC] NULLS FIRST
-- Since 2003, the NULLS FIRST|LAST
-- option is standardized
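A small sketch via Python's sqlite3 (SQLite supports the standard syntax since 3.30) shows NULLS LAST overriding the dialect's default placement, which in SQLite is NULLs first:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x BIGINT)")
con.executemany("INSERT INTO t (x) VALUES (?)", [(2,), (None,), (1,)])

# SQLite's default would sort the NULL first; NULLS LAST pins it down
rows = [r[0] for r in con.execute("SELECT x FROM t ORDER BY x NULLS LAST")]
print(rows)  # [1, 2, None]
```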
NULLS FIRST|LAST: Availability in Various Dialects

GREATEST / LEAST: Non-aggregate min/max

SELECT GREATEST(a, b, c)
     , LEAST(a, b, c)
FROM …

The SQL functions GREATEST and LEAST take one or more arguments and return the largest or smallest value if none of the arguments is NULL.
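Not every dialect spells these functions this way. SQLite, for instance, provides the same behavior through its multi-argument scalar max() and min() functions:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# with more than one argument, SQLite's max()/min() are scalar
# functions that behave like GREATEST/LEAST
greatest, least = con.execute("SELECT max(1, 3, 2), min(1, 3, 2)").fetchone()
print(greatest, least)  # 3 1

# like the standard functions, a single NULL argument yields NULL
null_result = con.execute("SELECT max(1, NULL, 2)").fetchone()[0]
print(null_result)  # None
```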
GREATEST / LEAST: Availability in Various Dialects

ANY_VALUE: When it does not matter

SELECT g
     , x -- selecting a non-grouping column is not allowed
FROM …
GROUP BY g
SELECT g
, ANY_VALUE(x) -- just take any non-null value
FROM …
GROUP BY g
You could always use MIN or MAX, but they might impose unnecessary overhead.
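Dialects without ANY_VALUE often have a pragmatic stand-in. SQLite, for example, simply permits bare non-grouping columns in a GROUP BY query and takes the value from an arbitrary row of each group; a sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (g INT, x TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(1, 'a'), (1, 'b'), (2, 'c')])

# SQLite has no ANY_VALUE, but a bare non-grouping column works:
# the value comes from an arbitrary row of each group
rows = con.execute("SELECT g, x FROM t GROUP BY g ORDER BY g").fetchall()
print(rows)  # e.g. [(1, 'a'), (2, 'c')] -- which x is picked is unspecified
```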
ANY_VALUE: Availability in Various Dialects

JSON
It's not about persistence
[ {"product": 42},
  {"product": 123, "qty": 2} ]

INSERT INTO order_lines (product, qty) -- directly load into the table
SELECT product, qty
FROM JSON_TABLE(?,     -- bind parameter for the JSON document
       '$[*]'          -- SQL/JSON path: one row per hit
       COLUMNS (product INT PATH '$.product' ERROR ON EMPTY,
                qty     INT PATH '$.qty'     DEFAULT 1 ON EMPTY)
       ERROR ON ERROR  -- remaining errors abort
     ) t

| product | qty |
+---------+-----+
|      42 |   1 |
|     123 |   2 |
JSON_TABLE (or similar): Availability in Various Dialects
• jsonb_to_recordset
• json_each and json_extract
• openjson
• json_query_array, unnest, and json_value
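In SQLite, which has no JSON_TABLE, the same load can be sketched with json_each (one row per array element) and COALESCE standing in for DEFAULT 1 ON EMPTY. Via Python's sqlite3:

```python
import json
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE order_lines (product INT, qty INT)")

doc = json.dumps([{"product": 42}, {"product": 123, "qty": 2}])

# json_each yields one row per array element; json_extract pulls
# the fields, COALESCE supplies the default quantity of 1
con.execute("""
    INSERT INTO order_lines (product, qty)
    SELECT json_extract(value, '$.product')
         , COALESCE(json_extract(value, '$.qty'), 1)
    FROM json_each(?)
""", (doc,))

rows = con.execute(
    "SELECT product, qty FROM order_lines ORDER BY product").fetchall()
print(rows)  # [(42, 1), (123, 2)]
```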
SQL is an…
Extensible Industry Standard
The Oracle Database can do the mapping automatically:
CREATE JSON DUALITY VIEW v AS
SELECT JSON {'_id': {'cid': cid, 'ts': ts},
             'items': [ SELECT JSON {'product': product
                                    ,'qty'    : qty }
                        FROM order_items ol WITH INSERT UPDATE DELETE
                        WHERE ol.cid=o.cid AND ol.ts=o.ts ]
            }
FROM orders o WITH INSERT UPDATE DELETE
INSERT INTO v VALUES ('{"_id": {"cid": 1}
                       ,"items": [{"product": 42}
                                 ,{"product": 123, "qty": 2}]
                       }')
SELECT * FROM orders

| cid | ts  |
+-----+-----+
|   1 | ... |

SELECT * FROM order_items

| cid | ts  | product | qty |
+-----+-----+---------+-----+
|   1 | ... |      42 |   1 |
|   1 | ... |     123 |   2 |
JSON duality views: Availability in Various Dialects

A very common problem: IN lists of dynamic length
ORA-01795: maximum number of expressions in a list is 1000
SELECT tbl.i
FROM tbl
WHERE i IN (SELECT x -- vvvvv--[in practice: bind param]
FROM JSON_TABLE('[1,3]'
,'$[*]'
COLUMNS (x INTEGER PATH '$')
ERROR ON ERROR
) jt
)
PostgreSQL: SELECT i::INT FROM json_array_elements_text(?) t(i)
SQLite:     SELECT value FROM json_each(?)
SQL Server: SELECT value FROM openjson(?)
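Here is the SQLite variant end to end, via Python's sqlite3: the whole list travels in a single bind parameter, so the statement text never changes no matter how many values there are (and there is no ORA-01795-style limit).

```python
import json
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (i INT)")
con.executemany("INSERT INTO tbl VALUES (?)", [(1,), (2,), (3,)])

# one bind parameter carries the whole IN list as a JSON array
ids = json.dumps([1, 3])
rows = con.execute("""
    SELECT i FROM tbl
    WHERE i IN (SELECT value FROM json_each(?))
    ORDER BY i
""", (ids,)).fetchall()
print(rows)  # [(1,), (3,)]
```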
Another common problem: Dynamic pivot

| year | revenue |
+------+---------+
| 2024 | ...     |
| 2023 | ...     |
| ...  | ...     |

As SQL is statically typed, columns must be declared upfront:

SELECT SUM(revenue) FILTER(WHERE year = 2024) AS revenue_2024
     , SUM(revenue) FILTER(WHERE year = 2023) AS revenue_2023
     , …
FROM sales

But you can also put everything in a single column:

SELECT JSON_OBJECTAGG(year VALUE revenue)
FROM (SELECT year, SUM(revenue) revenue
      FROM sales
      GROUP BY year
     ) t

{"2024": ...
,"2023": ...
}
JSON_OBJECTAGG (or similar): Availability in Various Dialects
• json_group_object
• JSON_OBJECT and ARRAY_AGG
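The SQLite spelling, json_group_object, produces the same single-column pivot. A runnable sketch via Python's sqlite3 (the revenue figures are made up for illustration):

```python
import json
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (year INT, revenue INT)")
# hypothetical revenue figures, just for illustration
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [(2023, 10), (2023, 5), (2024, 7)])

# SQLite spells JSON_OBJECTAGG as json_group_object
doc = con.execute("""
    SELECT json_group_object(year, revenue)
    FROM (SELECT year, SUM(revenue) AS revenue
          FROM sales
          GROUP BY year) t
""").fetchone()[0]
pivot = json.loads(doc)
print(pivot)  # {'2023': 15, '2024': 7}
```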
What do these examples have in common?
json_table • JSON duality views • IN lists • Dynamic pivot
They don't store JSON in the database.
They just use it as an interface.
SQL's JSON support is not so interesting because it allows us to store JSON in tables. It is so interesting because it allows us to process JSON in SQL.
Recursion
Non-trivial queries need logic
WITH RECURSIVE prev (id, parent) AS (
  SELECT t.id, t.parent
  FROM t
  WHERE t.id = 42
  UNION ALL
  SELECT t.id, t.parent
  FROM t
  WHERE t.parent = 42
)
SELECT * FROM prev
WITH RECURSIVE prev (id, parent) AS (
SELECT t.id, t.parent
FROM t
WHERE t.id = 42
UNION ALL
SELECT t.id, t.parent
FROM t
/* WHERE t.parent = 42 */
JOIN prev
ON t.parent = prev.id
)
SELECT * FROM prev
WITH RECURSIVE prev (id, parent) AS (
  SELECT t.id, t.parent
  FROM t
  WHERE t.id = 42
  UNION ALL
  SELECT t.id, t.parent
  FROM t
  /* WHERE t.parent = 42 */
  JOIN prev
  /* ON t.parent = prev.id */
  ON t.id = prev.parent
)
SELECT * FROM prev
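This query runs unchanged on SQLite. The sketch below builds a tiny parent chain and walks from node 42 up to the root, joining each found row's "parent" to the next row's "id":

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INT, parent INT)")
# a small tree: 42's parent is 7, 7's parent is the root 1
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(1, None), (7, 1), (42, 7), (99, 1)])

# walk from node 42 up to the root (NULL parent ends the recursion)
rows = con.execute("""
    WITH RECURSIVE prev (id, parent) AS (
        SELECT t.id, t.parent FROM t WHERE t.id = 42
        UNION ALL
        SELECT t.id, t.parent
        FROM t JOIN prev ON t.id = prev.parent
    )
    SELECT id FROM prev
""").fetchall()
ancestors = sorted(r[0] for r in rows)
print(ancestors)  # [1, 7, 42]
```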
WITH [RECURSIVE]: Availability in Various Dialects

Graphs
Common Requirements Justify Dedicated Syntax
CREATE PROPERTY GRAPH grph
  VERTEX TABLES ( t AS item KEY (id) )
  EDGE TABLES ( t AS parent KEY (id)
                SOURCE      KEY (id)     REFERENCES item (id)
                DESTINATION KEY (parent) REFERENCES item (id) )

SELECT *
FROM GRAPH_TABLE (grph
       MATCH (i0) <-[]- {,10} (ix)
       WHERE i0.id = 42
       COLUMNS (ix.*)
     ) gt
ORDER BY id
SELECT *
FROM GRAPH_TABLE (grph
/* MATCH (i0) <-[]- {,10} (ix) */
MATCH (i0) -[]->{,10} (ix)
WHERE i0.id = 42
COLUMNS (ix.*)
) gt
ORDER BY id
A lot has happened since SQL-92
SQL has evolved beyond the relational idea
If you use SQL for CRUD operations only, you are doing it wrong
https://modern-sql.com/ https://winand.at/