Slick v.s. Quill Smackdown
Alexander Ioffe
Introduction
v.s.
Slick
• 8 yrs old, 38k lines
• Embedded DSL (EDSL)
• Supported Databases:
  SQLServer, Oracle, DB2, MySQL, PostgreSQL, SQLite, Derby, HSQLDB, H2
• Runtime Queries (default), optional API for Compile-Time
• APIs for Async, Streaming, Sync (3rd party), and Effect-Type tracking
• Largest Contributors:
  Stefan Zeiger (170k), Christopher Vogt (11k), Hemant Kumar (1.7k)

Quill
• 2 yrs old, 34k lines
• Quoted DSL (QDSL)
• Supported Databases:
  SQLServer, MySQL, PostgreSQL, SQLite, H2, Spark, Cassandra, OrientDB
• Compile Time Queries (default), automatic fallback to Runtime
• APIs for Sync, Async, and via Finagle. Only Streaming for Cassandra.
• Largest Contributors:
  Flavio Brasil (79k), Mykhailo Osypov (5.4k), Juliano Alves (2.8k),
  Michael Ledin (2.7k), Gustavo Amigo (2.6k), jilen (2.4k), Subhobrata Dey (2k)
Anatomy of a Slick Query
TableQuery[Person].filter(_.age > 10)
TableQuery[Person].filter(person =>
  columnExtensionMethods(person.age).>(LiteralColumn[Int](10))
)(CanBeQueryCondition.BooleanColumnCanBeQueryCondition)
Filter(
from: Table(
PERSON,
Path(NAME), Path(AGE)…
)
where: Apply(
Function >
arg0: Path AGE
arg1: LiteralNode(10)
)
)
Expanded Implicits
Slick AST
select
NAME,
AGE…
from PERSON where AGE > 10
Query
Scala Compiler
~Scala Code
JDBC
Context
Evaluator
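For reference, a minimal self-contained version of the code this pipeline starts from (a sketch against Slick 3.x; the H2 profile, table, and column names are illustrative, not from the talk):

import slick.jdbc.H2Profile.api._

// the table definition that TableQuery[Person] above refers to
class Person(tag: Tag) extends Table[(Int, String, Int)](tag, "PERSON") {
  def id   = column[Int]("ID", O.PrimaryKey)
  def name = column[String]("NAME")
  def age  = column[Int]("AGE")
  def *    = (id, name, age)
}

// building the query only assembles a Slick AST; implicits lift `_.age > 10`
// into column expressions exactly as shown in the expansion above
val adults = TableQuery[Person].filter(_.age > 10)

// SQL is generated and executed at runtime when the query is run:
// db.run(adults.result)  // Future[Seq[(Int, String, Int)]]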
Anatomy of a Quill Query
quote{ query[Person].filter(_.age > 10) }
EntityQuery[Person].filter((x$1: Person) => x$1.age.>(10))
select
NAME,
AGE…
from PERSON where AGE > 10
(Compile Time) Queries
querySchema("Person").filter(
x1 => x1.age > 10)
.map(x => (x.id, x.name, x.age))
Scala Compiler
Macro Engine
Quasi Quote Parser
Scala AST
Quill AST
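And the equivalent minimal sketch on the Quill side (assuming a recent Quill; the SqlMirrorContext is used only so the example runs without a database — a real application would use a JDBC or async context instead):

import io.getquill._

case class Person(id: Int, name: String, age: Int)

object QuillAnatomy {
  val ctx = new SqlMirrorContext(PostgresDialect, Literal)
  import ctx._

  // parsed by the quote macro at compile time into a Quill AST
  val q = quote { query[Person].filter(_.age > 10) }

  def main(args: Array[String]): Unit =
    // with the mirror context, run(q) hands back the generated SQL instead of executing it
    println(ctx.run(q).string)
}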
Which one is Better? Slick Quill
Usability
Reliability
Extension Friendliness
Streaming
Testing
Ecosystem
Bonus
Usability
v.s.
Previously, on PHASE
SELECT DISTINCT
account.name, alias,
CASE WHEN code = 'EV'
THEN cast(account.number AS VARCHAR)
ELSE cast(account.number AS VARCHAR) + substring(alias, 1, 2) END AS OFFICIAL_IDENTITY,
CASE WHEN order_permission IN ('A', 'S')
THEN 'ST' ELSE 'ENH' END
FROM (
SELECT DISTINCT mc.alias, mc.code, mc.order_permission, mc.account_tag
FROM MERCHANT_CLIENTS mc
JOIN REGISTRY r ON r.alias = mc.alias
WHERE r.market = 'us' AND r.record_type = 'M'
UNION ALL
SELECT DISTINCT sc.alias, 'EV' AS code, part.order_permission, sc.account_tag
FROM SERVICE_CLIENTS sc
JOIN REGISTRY r ON r.alias = sc.alias AND r.record_type = 'S' AND r.market = 'us'
JOIN PARTNERSHIPS part ON part.id = sc.partnership_fk) client
INNER JOIN (
dbo.ACCOUNTS account
INNER JOIN ACCOUNT_TYPES accountType ON account.type = accountType.account_type
LEFT JOIN DEDICATED_ACCOUNTS dedicated ON dedicated.account_number = account.number)
ON (accountType.mapping_type = 0)
OR (accountType.mapping_type = 2 AND account.tag = client.account_tag)
OR (accountType.mapping_type = 1 AND dedicated.client_alias = client.alias)
SELECT DISTINCT
account.name, alias,
CASE (...) AS OFFICIAL_IDENTITY,
CASE (...)
FROM (
SELECT DISTINCT
mc.alias, mc.code, mc.order_permission, mc.account_tag (code, alias, perm, tag)
FROM MERCHANT_CLIENTS mc
JOIN REGISTRY r ON (alias) AND (otherConditions)
UNION ALL
SELECT DISTINCT
sc.alias, 'EV' AS code, part.order_permission, sc.account_tag (code, alias, perm, tag)
FROM SERVICE_CLIENTS sc
JOIN REGISTRY r ON (alias) AND (otherConditions)
JOIN PARTNERSHIPS part ON (id <-> fk)) client
INNER JOIN (
dbo.ACCOUNTS account
INNER JOIN ACCOUNT_TYPES accountType ON (account_type)
LEFT JOIN DEDICATED_ACCOUNTS dedicated ON (account_number)
)
ON (possibly anything...)
OR (possibly the account tag...)
OR (possibly the alias...) → All Depending on the accountType
CREATE FUNCTION dbo.merchantClientsUdf (@market)
RETURNS table as RETURN (
SELECT DISTINCT
alias, code, order_permission, account_tag
FROM MERCHANT_CLIENTS merchantClient
JOIN REGISTRY entry
ON entry.alias = merchantClient.alias
WHERE entry.market = @market
AND entry.record_type = 'M')
CREATE VIEW CLIENT_ACCOUNTS AS
SELECT DISTINCT
account.name, alias,
CASE WHEN code = 'EV'
THEN cast(account.number AS VARCHAR)
ELSE cast(account.number AS VARCHAR) + substring(alias, 1, 2) END AS OFFICIAL_IDENTITY,
CASE WHEN order_permission IN ('A', 'S')
THEN 'ST' ELSE 'ENH' END
FROM (select * from merchantClientsUdf ('us') union
select * from serviceClientsUdf ('us')) as client
INNER JOIN (
dbo.ACCOUNTS account
INNER JOIN ACCOUNT_TYPES accountType ON account.type = accountType.account_type
LEFT JOIN DEDICATED_ACCOUNTS dedicated ON dedicated.account_number = account.number)
ON (accountType.mapping_type = 0)
OR (accountType.mapping_type = 2 AND account.tag = client.account_tag)
OR (accountType.mapping_type = 1 AND dedicated.client_alias = client.alias)
CREATE FUNCTION dbo.serviceClientsUdf (@market)
RETURNS table as RETURN (
SELECT DISTINCT
alias, code, order_permission, account_tag
FROM SERVICE_CLIENTS serviceClient
JOIN REGISTRY entry
ON entry.alias = serviceClient.alias
AND entry.record_type = 'S'
AND entry.market = @market
JOIN PARTNERSHIPS partnership
ON partnership.id = serviceClient.partnership_fk)
CREATE FUNCTION dbo.merchantClientsUdf (@market)
RETURNS table as RETURN (…)
CREATE VIEW EU_CLIENT_ACCOUNTS AS
SELECT DISTINCT
account.name, alias,
CASE WHEN code = 'EV'
THEN cast(account.number AS VARCHAR)
ELSE cast(account.number AS VARCHAR) + substring(alias, 1, 2) END AS OFFICIAL_IDENTITY,
CASE WHEN order_permission IN ('A', 'S')
THEN 'ST' ELSE 'ENH' END
FROM (select * from merchantClientsUdf ('eu') union
select * from enhancedServiceClientsUdf ('eu')) as client
INNER JOIN (
dbo.ACCOUNTS account
INNER JOIN ACCOUNT_TYPES accountType ON account.type = accountType.account_type
LEFT JOIN DEDICATED_ACCOUNTS dedicated ON dedicated.account_number = account.number)
ON (accountType.mapping_type = 0)
OR (accountType.mapping_type = 2 AND account.tag = client.account_tag)
OR (accountType.mapping_type = 1 AND dedicated.client_alias = client.alias)
CREATE FUNCTION dbo.enhancedServiceClientsUdf (@market)
RETURNS table as RETURN (
SELECT DISTINCT
alias, code, order_permission, account_tag
FROM SERVICE_CLIENTS serviceClient
JOIN REGISTRY entry
ON (... entry.market = @market ...)
JOIN PARTNERSHIPS partnership ON (…)
JOIN PARTNERSHIP_CODES pc
ON partnership.ID = pc.partnership_fk)
CREATE FUNCTION dbo.merchantClientsUdf (@market)
RETURNS table as RETURN (…)
CREATE VIEW CA_CLIENT_ACCOUNTS AS
SELECT DISTINCT
account.name, alias,
CASE WHEN code = 'EV'
THEN cast(account.number AS VARCHAR)
ELSE cast(account.number AS VARCHAR) + substring(alias, 1, 2) END AS OFFICIAL_IDENTITY,
CASE WHEN order_permission IN ('A', 'S')
THEN 'ST' ELSE 'ENH' END
FROM (select * from merchantClientsUdf ('ca')) as client
INNER JOIN (
dbo.ACCOUNTS account
INNER JOIN ACCOUNT_TYPES accountType ON account.type = accountType.account_type
LEFT JOIN DEDICATED_ACCOUNTS dedicated ON dedicated.account_number = account.number)
ON (accountType.mapping_type = 0)
OR (accountType.mapping_type = 2 AND account.tag = client.account_tag)
OR (accountType.mapping_type = 1 AND dedicated.client_alias = client.alias)
~O(N)/2 Codebase Size Per N Business Units!
Tables:
• merchantClientsUdf
• serviceClientsUdf
• CLIENT_ACCOUNTS

Tables:
• merchantClientsUdf
• serviceClientsUdf
• US_CLIENT_ACCOUNTS
• enhancedServiceClientsUdf +
• EU_CLIENT_ACCOUNTS +
• CA_CLIENT_ACCOUNTS +

= (Still) Lots of Technical Debt
… and behold!
def merchantClientsUdf(market:String):Query[(String, String, Char, String)] = {
for {
mc <- merchantClients
r <- registry
if (r.alias === mc.alias && r.market === market
&& r.recordType === 'M')
} yield (mc.alias, mc.code, mc.orderPermission, mc.accountTag)
}
SELECT DISTINCT mc.alias, mc.code, order_permission, mc.account_tag
FROM MERCHANT_CLIENTS mc
JOIN REGISTRY r ON r.alias = mc.alias
WHERE r.market = 'us' AND r.record_type = 'M'
This Is a Lie
def merchantClientsUdf(market:String):
Query[(Rep[String], Rep[String], Rep[Char], Rep[String]),
      (String, String, Char, String), Seq] =
{
for {
mc <- merchantClients
r <- registry
if (r.alias === mc.alias && r.market === market
&& r.recordType === 'M')
} yield (mc.alias, mc.code, mc.orderPermission, mc.accountTag)
}
SELECT DISTINCT mc.alias, mc.code, order_permission, mc.account_tag
FROM MERCHANT_CLIENTS mc
JOIN REGISTRY r ON r.alias = mc.alias
WHERE r.market = 'us' AND r.record_type = 'M'
The Truth Is
def merchantClientsUdf(market:String):
Query[ClientLifted, Client, Seq] =
{
for {
mc <- merchantClients
r <- registry
if (r.alias === mc.alias && r.market === market
&& r.recordType === 'M')
} yield (mc.alias, mc.code, mc.orderPermission, mc.accountTag)
}
SELECT DISTINCT mc.alias, mc.code, order_permission, mc.account_tag
FROM MERCHANT_CLIENTS mc
JOIN REGISTRY r ON r.alias = mc.alias
WHERE r.market = 'us' AND r.record_type = 'M'
The Truth Is
def merchantClientsUdf(market:String) = quote {
for {
mc <- merchantClients
r <- registry
if (r.alias == mc.alias && r.market == lift(market)
&& r.recordType == "M")
} yield (mc.alias, mc.code, mc.orderPermission, mc.accountTag)
}
SELECT DISTINCT mc.alias, mc.code, order_permission, mc.account_tag
FROM MERCHANT_CLIENTS mc
JOIN REGISTRY r ON r.alias = mc.alias
WHERE r.market = 'us' AND r.record_type = 'M'
Quill Is Similar
def merchantClientsUdf(market:String):
Quoted[Query[(String,String,Char,String)]]=
quote {
for {
mc <- merchantClients
r <- registry
if (r.alias == mc.alias && r.market == lift(market)
&& r.recordType == "M")
} yield (mc.alias, mc.code, mc.orderPermission, mc.accountTag)
}
SELECT DISTINCT mc.alias, mc.code, order_permission, mc.account_tag
FROM MERCHANT_CLIENTS mc
JOIN REGISTRY r ON r.alias = mc.alias
WHERE r.market = 'us' AND r.record_type = 'M'
… but with sane type signatures
def merchantClientsUdf(market:String):
Quoted[Query[Client]]=
quote {
for {
mc <- merchantClients
r <- registry
if (r.alias == mc.alias && r.market == lift(market)
&& r.recordType == "M")
}
yield Client(mc.alias, mc.code, mc.orderPermission, mc.accountTag)
}
SELECT DISTINCT mc.alias, mc.code, order_permission, mc.account_tag
FROM MERCHANT_CLIENTS mc
JOIN REGISTRY r ON r.alias = mc.alias
WHERE r.market = 'us' AND r.record_type = 'M'
… with Case Classes it’s even better
def merchantClientsUdf(market:String):
Quoted[Query[(Option[String], Option[String], Option[Char], Option[String])]]
= quote {
for {
mc <- merchantClients
r <- registry
if (r.alias == mc.alias && r.market == lift(market)
&& r.recordType == "M")
} yield (mc.alias, mc.code, mc.orderPermission, mc.accountTag)
}
It’s a bit touchy with Optionals
… and for a reason
SELECT DISTINCT
mc.alias, mc.code, order_permission,
mc.account_tag
FROM MERCHANT_CLIENTS mc
JOIN REGISTRY r ON r.alias = mc.alias
select * from MERCHANT_CLIENTS
where ACCOUNT_TAG = null
Always False

select * from MERCHANT_CLIENTS
where ACCOUNT_TAG is null
Can be True

(null = null) = false
… unless ANSI_NULLS is OFF:

SET ANSI_NULLS OFF

select * from MERCHANT_CLIENTS
where ACCOUNT_TAG = null
Can be True

select * from MERCHANT_CLIENTS
where ACCOUNT_TAG is null
Can be True

(null = null) = true
def merchantClientsUdf(market:String):
  Quoted[Query[(Option[String], Option[String], Option[Char], Option[String])]]
  = quote {
    for {
      mc <- merchantClients
      r <- registry
      if (r.alias.exists(rr => mc.alias.exists(_ == rr))
        && r.market.exists(_ == lift(market))
        && r.recordType.exists(_ == "M"))
    } yield (mc.alias, mc.code, mc.orderPermission, mc.accountTag)
  }
This will solve the problem…
... or Make Some Operators
implicit class OptionalExtensions[T](o: Option[T]) {
  def ===(p: Option[T]) =
    quote { o.exists(oo => p.exists(_ == oo)) }
  def ~=~(p: Option[T]) =
    quote { o.exists(oo => p.exists(_ == oo)) }
  def ~==(p: T) =
    quote { o.exists(_ == p) }
}
implicit class PlainExtensions[T](o: T) {
  def ==~(p: Option[T]) =
    quote { p.exists(_ == o) }
}
def merchantClientsUdf(market:String):
  Quoted[Query[(Option[String], Option[String], Option[Char], Option[String])]]
  = quote {
    for {
      mc <- merchantClients
      r <- registry
      if (r.alias ~=~ mc.alias
        && r.market ~== lift(market)
        && r.recordType ~== "M")
    } yield (mc.alias, mc.code, mc.orderPermission, mc.accountTag)
  }
… and it can be remedied
Overview…
def merchantClientsUdf(market:String):Query[ClientLifted, Client, Seq] =
for {
mc <- merchantClients
r <- registry
if (r.alias === mc.alias && r.market === market && r.recordType === 'M')
} yield Client(mc.alias, mc.code, mc.orderPermission, mc.accountTag)
def serviceClientsUdf(market:String):Query[ClientLifted, Client, Seq] =
for {
sc <- serviceClients
r <- registry
if (r.alias === sc.alias && r.market === market && r.recordType === 'S')
part <- partnerships
if (part.id === sc.partnershipFk)
} yield Client(sc.alias, "EV".bind.?, part.orderPermission, sc.accountTag)
def clients(market:String):Query[ClientLifted, Client, Seq]
= merchantClientsUdf(market) ++ serviceClientsUdf(market)
def merchantClientsUdf(market:String):Quoted[Query[Client]] = quote {
for {
mc <- merchantClients
r <- registry
if (r.alias ~=~ mc.alias) && (r.market ~== lift(market)) && (r.recordType ~== "M")
} yield Client(mc.alias, mc.code, mc.orderPermission, mc.accountTag)
}
def serviceClientsUdf(market:String):Quoted[Query[Client]] = quote {
for {
sc <- serviceClients
r <- registry
if (r.alias ~=~ sc.alias) && (r.market ~== lift(market)) && (r.recordType ~== "S")
part <- partnerships
if (part.id == sc.partnershipFk)
} yield Client(sc.alias, Some("EV"), part.orderPermission, sc.accountTag)
}
def clients(market:String):Quoted[Query[Client]] = quote {
merchantClientsUdf(market) ++ serviceClientsUdf(market)
}
Stepping Back
SELECT DISTINCT
account.name, alias,
CASE (...) AS OFFICIAL_IDENTITY,
CASE (...)
FROM (...) client
INNER JOIN (
dbo.ACCOUNTS account
INNER JOIN ACCOUNT_TYPES accountType ON (account_type)
LEFT JOIN DEDICATED_ACCOUNTS dedicated ON (account_number)
)
ON (accountType.mapping_type = 0)
OR (possibly the account tag...)
OR (possibly the alias...)
name alias OFFICIAL_IDENTITY perm
TUNV FNF 111 ENH
TUNV ACME 111AC ENH
SIADV FNF 456 ENH
AUNV FNF 222 ENH
AUNV ACME 222AC ENH
ACMEINV ACME 808AC ENH
YOGADV YOGL 123 ST
TUNV YOGL 111 ST
AUNV YOGL 222 ST
SELECT DISTINCT
account.name, alias,
CASE (...) AS OFFICIAL_IDENTITY,
CASE (...)
FROM (...) client
INNER JOIN (
dbo.ACCOUNTS account
INNER JOIN ACCOUNT_TYPES accountType ON (account_type)
LEFT JOIN DEDICATED_ACCOUNTS dedicated ON (account_number)
)
ON (possibly anything...)
OR (accountType.mapping_type = 2 AND account.tag = client.account_tag)
OR (possibly the alias...)
name alias OFFICIAL_IDENTITY perm
TUNV FNF 111 ENH
TUNV ACME 111AC ENH
SIADV FNF 456 ENH
AUNV FNF 222 ENH
AUNV ACME 222AC ENH
ACMEINV ACME 808AC ENH
YOGADV YOGL 123 ST
TUNV YOGL 111 ST
AUNV YOGL 222 ST
SELECT DISTINCT
account.name, alias,
CASE (...) AS OFFICIAL_IDENTITY,
CASE (...)
FROM (...) client
INNER JOIN (
dbo.ACCOUNTS account
INNER JOIN ACCOUNT_TYPES accountType ON (account_type)
LEFT JOIN DEDICATED_ACCOUNTS dedicated ON (account_number)
)
ON (possibly anything...)
OR (possibly the account tag...)
OR (accountType.mapping_type = 1 AND dedicated.client_alias = client.alias)
name alias OFFICIAL_IDENTITY perm
TUNV FNF 111 ENH
TUNV ACME 111AC ENH
SIADV FNF 456 ENH
AUNV FNF 222 ENH
AUNV ACME 222AC ENH
ACMEINV ACME 808AC ENH
YOGADV YOGL 123 ST
TUNV YOGL 111 ST
AUNV YOGL 222 ST
def mappingConditionsMet(
mappingType: Rep[Int],
accountTag: Rep[String],
clientTag: Rep[String],
clientAlias: Rep[String],
dedicatedAlias: Rep[String]
):Rep[Int] =
Case.If(mappingType === 0).Then(1)
.If(mappingType === 2 && accountTag === clientTag).Then(1)
.If(mappingType === 1 && clientAlias === dedicatedAlias).Then(1)
.Else(0)
def accountMapping(clients:Query[ClientsLifted, Clients, Seq]):
  Query[
    (ClientsLifted, AccountsLifted, AccountTypesLifted, Rep[Option[DedicatedAccounts]]),
    (Clients, Accounts, AccountTypes, Option[DedicatedAccounts]),
    Seq
  ] = {
for {
(account, accountType, dedicatedAccount) <- allAccounts
client <- clients if (mappingConditionsMet(
accountType.mappingType.getOrElse(0), account.tag.getOrElse(""),
client.accountTag.getOrElse(""), client.alias.getOrElse(""),
dedicatedAccount.map(_.clientAlias).flatten.getOrElse("")) == 1)
} yield (client, account, accountType, dedicatedAccount)
}
val mappingConditionsMet = quote {
(
mappingType: Option[Int],
accountTag: Option[String],
clientTag: Option[String],
clientAlias: Option[String],
dedicatedAlias: Option[Option[String]]
) =>
if (mappingType == 0) 1
else if ((mappingType == 2) && (accountTag ==~ clientTag)) 1
else if ((mappingType == 1) && (dedicatedAlias.exists(_ ~=~ clientAlias))) 1
else 0
}
def accountMapping(clients:Quoted[Query[Client]]):
Quoted[Query[(Client, Accounts, AccountTypes, Option[DedicatedAccounts])]] =
quote {
for {
(account, accountType, dedicatedAccount) <- allAccounts
client <- clients if (
mappingConditionsMet(
accountType.mappingType, account.tag,
client.accountTag, client.otherAlias,
dedicatedAccount.map(_.clientAlias)
) == 1)
} yield (client, account, accountType, dedicatedAccount)
}
val mappingConditionsMet:
Quoted[(Option[Int], Option[String], Option[String], Option[String], Option[Option[String]]) => Int] = quote {
(
mappingType: Option[Int],
accountTag: Option[String],
clientTag: Option[String],
clientAlias: Option[String],
dedicatedAlias: Option[Option[String]]
) =>
if (mappingType == 0) 1
else if ((mappingType == 2) && (accountTag ==~ clientTag)) 1
else if ((mappingType == 1) && (dedicatedAlias.exists(_ ~=~ clientAlias))) 1
else 0
}
Actions
DBIO.seq(
TableQuery[Person] += Person("Joe", "Roe")
)
DBIO.seq(
TableQuery[Person] ++=
Seq(Person("Joe", "Roe"), ...)
)
quote {
query[Person].insert(lift(Person("Joe", "Roe")))
}
quote {
liftQuery(List(Person("Joe", "Roe")), …)
.foreach(e => query[Person].insert(e))
}
Individual
Bulk
Actions Continued…
DBIO.seq(
TableQuery[Person].map(_.firstName) += ("Joe")
)
quote {
query[Person].insert(_.firstName -> lift("Joe"))
}
Insert Specific Columns
Actions Continued…
DBIO.seq(
(TableQuery[Person].returning(person.id)) += Record(0, "1")
)
DBIO.seq(
(TableQuery[Person].returning(person.id)) ++=
Seq(Person("Joe", "Roe"), ...)
)
quote {
query[Person].insert(lift(Record(0, "1"))).returning(_.id)
}
quote {
liftQuery(List(Record(0, "1")), …)
.foreach(e => query[Person].insert(e).returning(_.id))
}
Inserting, Returning Ids – Individual
Inserting, Returning Ids – Bulk
Show me the Queries
SELECT
s189.s137, s189.s138, s189.s139, s189.s140, s176."NAME", s176."TAG", s176."NUMBER", s176."TYPE",
s177."ACCOUNT_TYPE", s177."MAPPING_TYPE", s47.s118, s47.s119, s47.s120
FROM "ACCOUNTS" s176
INNER JOIN "ACCOUNT_TYPES" s177
ON s176."TYPE" = s177."ACCOUNT_TYPE"
LEFT OUTER JOIN (
SELECT 1 AS s118, "ACCOUNT_NUMBER" AS s119, "CLIENT_ALIAS" AS s120
FROM "DEDICATED_ACCOUNTS") s47 ON s176."NUMBER" = s47.s119
INNER JOIN (
SELECT s179."ALIAS" AS s137, ? AS s138, s183."ORDER_PERMISSION" AS s139, s179."ACCOUNT_TAG" AS s140
FROM "SERVICE_CLIENTS" s179, "REGISTRY" s180, "PARTNERSHIPS" s183
WHERE ((((CASE WHEN (s179."ALIAS" IS NULL)
THEN ? ELSE cast(s179."ALIAS" AS VARCHAR(255)) END) = s180."ALIAS")
AND (s180."RECORD_TYPE" = 'M')) AND (s180."MARKET" = 'us')) AND (s183."ID" = s179."PARTNERSHIP_FK")
UNION ALL
SELECT s185."ALIAS" AS s137, s185."CODE" AS s138, s185."ORDER_PERMISSION" AS s139, s185."ACCOUNT_TAG" AS s140
FROM "MERCHANT_CLIENTS" s185, "REGISTRY" s186
WHERE (((CASE WHEN (s185."ALIAS" IS NULL)
THEN ? ELSE cast(s185."ALIAS" AS VARCHAR(255)) END) = s186."ALIAS") AND (s186."RECORD_TYPE" = 'M')) AND (s186."MARKET" = 'us'))
s189
ON (CASE WHEN ((CASE WHEN (s177."MAPPING_TYPE" IS NULL)
THEN ? ELSE cast(s177."MAPPING_TYPE" AS INTEGER) END) = 0)
THEN 1
WHEN (((CASE WHEN (s177."MAPPING_TYPE" IS NULL)
THEN ? ELSE cast(s177."MAPPING_TYPE" AS INTEGER) END) = 2) AND ((CASE WHEN (s176."TAG" IS NULL)
THEN ? ELSE cast(s176."TAG" AS VARCHAR(255)) END) = (CASE WHEN (s189.s140 IS NULL)
THEN ? ELSE cast(s189.s140 AS VARCHAR(255)) END)))
THEN 1
WHEN (((CASE WHEN (s177."MAPPING_TYPE" IS NULL)
THEN ? ELSE cast(s177."MAPPING_TYPE" AS INTEGER) END) = 1) AND ((CASE WHEN (s189.s137 IS NULL)
THEN ? ELSE cast(s189.s137 AS VARCHAR(255)) END) = (CASE WHEN ((CASE WHEN (s47.s118 IS NOT NULL)
THEN s47.s120 ELSE NULL END) IS NULL)
THEN ? ELSE cast((CASE WHEN (s47.s118 IS NOT NULL) THEN s47.s120
ELSE NULL END) AS VARCHAR(255)) END)))
THEN 1
ELSE 0 END) = 1
SELECT
client.other_alias, client.code, client.order_permission, client.account_tag,
account_type.name, account_type.tag, account_type.number, account_type.type, account_type.account_type,
account_type.mapping_type,
x11.account_number, x11.client_alias
FROM (SELECT
account.type type, account.name name, account.number number,
account.tag tag, account_type.account_type account_type, account_type.mapping_type mapping_type
FROM accounts account, account_types account_type
WHERE account.type = account_type.account_type) account_type
LEFT JOIN dedicated_accounts x11
ON x11.account_number = account_type.number,
(
(SELECT sc.account_tag account_tag, sc.alias other_alias, ? code, part.order_permission order_permission
FROM service_clients sc, registry r, partnerships part
WHERE sc.alias = r.alias AND r.market = ? AND r.record_type = 'S' AND sc.partnership_fk = part.id)
UNION ALL
(SELECT mc.account_tag account_tag, mc.alias other_alias, mc.code code, mc.order_permission order_permission
FROM merchant_clients mc, registry r1
WHERE mc.alias = r1.alias AND r1.market = ? AND r1.record_type = 'M')
) client
WHERE CASE WHEN CASE WHEN account_type.mapping_type IS NOT NULL
THEN ? ELSE 0 END = 0
THEN 1
WHEN CASE WHEN account_type.mapping_type IS NOT NULL
THEN ?
ELSE 0 END = 2 AND client.account_tag = CASE WHEN account_type.tag IS NOT NULL
THEN ? ELSE '' END
THEN 1
WHEN CASE WHEN account_type.mapping_type IS NOT NULL
THEN ?
ELSE 0 END = 1 AND client.other_alias = x11.client_alias
THEN 1
ELSE 0 END = 1
A ‘Slight’ Footnote
Usability
Wins!
Obvious Question: Why not just get rid of U (Slick's unpacked Query type parameter)?
Reliability
v.s.
Reliability – What are we measuring?
Performance Under Load?
Percentage Code Tested?
Generating Queries Correctly
@Entity
public class A {
  @OneToMany
  @JoinColumn(name = "fk")
  private List<B> bs = new ArrayList<B>();
}

for (B b : a.bs) {
  doSomethingWith(b);
}

EH Cache
JPQL
[10.11.2015 15:27:21.088] [ERROR]
[application-akka.actor.default-dispatcher-12]
[application] Error 'internalError - Cannot convert node to SQL Comprehension
| GroupBy t9 : Vector[(t9<String'>, Vector[t17<{s18: String'}>])]
| from s8: Bind : Vector[t17<{s18: String'}>]
| from s13: Table test_query : Vector[@t11<{id: String'}>]
| select: Pure t17 : Vector[t17<{s18: String'}>]
| value: StructNode : {s18: String'}
| s18: Path s13.id : String'
| by: Path s8.s18 : String'
' occurred.

table.groupBy(_.id).map { case (c, tbl) =>
  (c, tbl.length)
}
Table
  .groupBy(_.id)
  .map { case (c, tbl) =>
    (c, tbl.length)
  }

Table
  .drop(0)
  .groupBy(_.id)
  .map { case (c, tbl) =>
    (c, tbl.length)
  }

… and the solution is:
Slick Compiler Phases:
$ git log -p slick/src/main/scala/slick/compiler | awk ... | sort -rn
Total Adds Deletes
2933 5166 2233 Stefan Zeiger
50 50 0 Alexander Ioffe
14 35 21 deusaquilus
7 11 4 Sue
1 12 11 Iulian Dogariu
1 7 6 Ashutosh Agarwal
0 4 4 Mateusz Kołodziejczyk
-16 5 21 Ólafur Páll Geirsson
Who Understands
slick/src/main/scala/slick/compiler
???
Total Adds Deletes
97.1% 97.7% 97.1% Stefan Zeiger
1.7% 0.9% 0.0% Alexander Ioffe
0.5% 0.7% 0.9% deusaquilus
0.2% 0.2% 0.2% Sue
0.0% 0.2% 0.5% Iulian Dogariu
0.0% 0.1% 0.3% Ashutosh Agarwal
0.0% 0.1% 0.2% Mateusz Kołodziejczyk
0.5% 0.1% 0.9% Ólafur Páll Geirsson
$ cat file.txt | awk '/./ && !author { author = $0; next } author {
ins[author] += $1; del[author] += $2 } /^$/ { author = ""; next } END {
for (a in ins) { printf "%10d %10d %10d %s\n", ins[a] - del[a], ins[a],
del[a], a } }' | sort -rn
Enter the Wadler
We require that each query in the host language generate
exactly one SQL query. Alluding to twin perils Odysseus
sought to skirt when navigating the straits of Messina, we
seek to avoid Scylla and Charybdis. Scylla stands for the case
where the system fails to generate a query, signalling an error.
Charybdis stands for the case where the system generates
multiple queries, hindering efficiency. The overhead of
accessing a database is high, and to a first approximation cost
is proportional to the number of queries. We particularly want
to avoid a query avalanche, in the sense of Grust et al.
(2010), where a single host query generates a number of SQL
queries proportional to the size of the data.
Our work avoids these perils. For T-LINQ, we prove the
Scylla and Charybdis theorem, characterising when a host
query is guaranteed to generate a single SQL query. All our
examples are easily seen to satisfy the characterisation in the
theorem, and indeed our theory yields the same SQL query
for each that one would write by hand. For P-LINQ, we
verify that its run time on our examples is comparable to that
of F# 2.0 and F# 3.0, in the cases where those systems
generate a query, and significantly faster in the one case
where F# 3.0 generates an avalanche—indeed, arbitrarily
faster as the size of the data grows.
Enter Flavio
It goes without saying…
.length after groupBy + map works
object TestQuery {
  val q = quote {
    query[TestQuery].groupBy(_.id).map { case (c, tbl) =>
      (c, tbl.size)
    }
  }
  def runQuery = run(q)
}
Reliability
Wins!
Obvious Question: Why not just rewrite Slick’s query compiler using Wadler’s Rules?
Extension Friendliness
v.s.
Queries
def countBy[E, U, K, T](query:Query[E, U, Seq])(predicate:E=>K)(
implicit kshape: Shape[_ <: FlatShapeLevel, K, T, K],
vshape: Shape[_ <: FlatShapeLevel, E, _, E]):
Query[(K, Rep[Int]), (T, Int), scala.Seq] =
{
query.groupBy(predicate).map {
case(field, records) => (field, records.length) }
}
val q = countBy(accounts)(_.`type`)
def conditionalTake[E, U](
  query:Query[E, U, Seq], numRecords:Option[Int]):
  Query[E, U, Seq] = {
numRecords match {
case Some(number) => query.take(number)
case None => query
}
}
val q = conditionalTake(accounts, Some(10))
type count
ADVERTISING 4
INVENTORY 3
TAX 1
http://host/query?nr=100
NAME TAG NUMBER TYPE
TUNV NULL 111 TAX
YOGADV YOG 123 ADVERTISING
SIADV SID 456 ADVERTISING
SIADVA SIDA 457 ADVERTISING
UMBINV NULL 707 INVENTORY
FFADV FF 789 ADVERTISING
ACMEINV NULL 808 INVENTORY
YOGINV NULL 909 INVENTORY
def countBy[E, K] =
  quote {
    (query: Query[E]) => (predicate: E => K) =>
      query.groupBy(e => predicate(e)).map {
        case (field, records) => (field, records.size)
      }
  }

val q = quote { countBy(accounts)(_.`type`) }

def conditionalTake[E](query: Quoted[Query[E]], take: Option[Int]) =
  take match {
    case Some(num) => quote { query.take(lift(num)) }
    case None => quote { query }
  }

val q = conditionalTake(accounts, Some(10))

Also takes Quoted[Query[E]] ... but IDEs don’t always understand that
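A sketch of that last note, assuming the clients quotation from the earlier slides is in scope: quotations compose, so a previously built Quoted[Query[...]] can be passed straight into countBy inside another quotation.

// hypothetical composition: `clients` and `countBy` are the quotations defined above
val countByCode = quote { countBy(clients("us"))(_.code) }
// run(countByCode) still produces a single grouped SQL query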
Extension Friendliness
Custom Outputs
(from yields, maps, etc...)
v.s.
Queries
trait CustomerDef { this: ProfileComponent =>
import profile.api._
case class Client(alias: Rep[Option[String]],
code: Rep[Option[String]], permission: Rep[Option[Char]], tag: Rep[Option[String]])
case class ClientRow(alias: Option[String],
code: Option[String], permission: Option[Char], tag: Option[String])
implicit object ClientRecordShape extends CaseClassShape[
Product,
(Rep[Option[String]], Rep[Option[String]], Rep[Option[Char]], Rep[Option[String]]), Client,
(Option[String], Option[String], Option[Char], Option[String]), ClientRow](Client.tupled,
ClientRow.tupled)
}
case class Client(alias: Option[String], code: Option[String],
  permission: Option[String], tag: Option[String])
Structural Type-Based Extensions
case class ShippingOrder(customerId:Int, startedAt:LocalDateTime, endedAt:LocalDateTime)
case class ProcessRequest(regionId:Int, startedAt:LocalDateTime, endedAt:LocalDateTime)
case class Event(typeCode:String, startedAt:LocalDateTime, endedAt:LocalDateTime)
implicit class TemporalObjectExtensions[T <: { def startedAt: LocalDateTime; def endedAt: LocalDateTime }](
  records: Query[T])
{
  def existedAt(date: LocalDateTime) =
    quote {
      records.filter(r => (r.startedAt < date) && (r.endedAt > date))
    }
}

val q = quote { query[ShippingOrder].existedAt(lift(now)) }
val q = quote { query[ProcessRequest].existedAt(lift(now)) }
val q = quote { query[Event].existedAt(lift(now)) }
No Infix Date Ops?
No Problem!
implicit class LocalDateOps(left: LocalDateTime) {
  def >  (right: LocalDateTime) = quote(infix"$left > $right".as[Boolean])
  def >= (right: LocalDateTime) = quote(infix"$left >= $right".as[Boolean])
  def <  (right: LocalDateTime) = quote(infix"$left < $right".as[Boolean])
  def <= (right: LocalDateTime) = quote(infix"$left <= $right".as[Boolean])
}
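A quick usage sketch of those operators (assuming the Event case class above and a Quill context import such as `import ctx._` are in scope; `now` is an ordinary runtime value):

import java.time.LocalDateTime

val now = LocalDateTime.now()

// the infix fragments above are spliced into the generated WHERE clause
val activeEvents = quote {
  query[Event].filter(e => (e.startedAt < lift(now)) && (e.endedAt > lift(now)))
}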
Extension Friendliness
Wins!
Obvious Question: (Again) Why not just get rid of U?
Streaming API
v.s.
Our Query Comes out…
def doPublish(ds: DataSource) = {
  val pub: DatabasePublisher[_ <: Product] =
    ds.stream(TableQuery[Accounts].result)
      .mapResult(r => serializeToJson(r))
  val akkaSource =
    Source.fromPublisher(pub)
      .map(r => ChunkStreamPart(r))
  HttpResponse(entity =
    HttpEntity.Chunked(format.contentType, akkaSource)
  )
}
… of Here!
val route =
get {
pathPrefix("path" / "from" / "host") {
pathEndOrSingleSlash {
ctx => ctx.complete { doPublish(ds) }
}
}
}
Streaming
Wins!
Obvious Question: Write Akka Extensions for Quill
… or just move the Monix-based API to Quill-Jdbc from Quill-Cassandra?
Testing
v.s.
What if we could test Queries…
class MyMemoryDriver extends ModifiedMemoryProfile { }
class MyHeapDriver extends RelationalTestDB {
type Driver = MemoryDriver
val driver: Driver = new MyMemoryDriver
}
Without a Database!
class AccountClientOrderMerchantSupplierTest extends FunSuite with BeforeAndAfter {
val heapDriver = new MyHeapDriver
val profile = heapDriver.driver.profile
import profile.api._
before { initializeEntireSchema() }
test("Create Accounts, Clients, Orders, Merchants, Suppliers and Test") {
db.run(for {
_ <- accounts ++= Seq(Account(...), ...)
_ <- clients ++= Seq(Client(...), ...)
_ <- orders ++= Seq(Order(...), ...)
_ <- merchants ++= Seq(Merchant(...), ...)
_ <- suppliers ++= Seq(Supplier(...), ...)
q <- giantQueryThatCombinesEverything()
_ = { assertRealityIsInLineWithExpectations(accounts, clients, orders, merchants, suppliers) }
} yield ())
  }
}
Without a Database!
• Create Schema (~100 `Tables`)
• Initialize Schema with Dozens of Records
• Run Integration Tests with Real Production Data
• 200 ~ 400ms Per Test
Testing
Wins!
Obvious Question: Why not write a Memory Driver for the Quill AST, taking some notes from slick.memory.QueryInterpreter?
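Not a full in-memory evaluator, but one partial answer that works today, sketched here with Quill's SqlMirrorContext (entity and test names are illustrative): the mirror context lets a plain unit test assert on the SQL a composed quotation generates, without touching a database.

import io.getquill._
import org.scalatest.FunSuite

case class Account(name: String, tag: Option[String], number: Int, `type`: String)

class AccountQueriesTest extends FunSuite {
  val ctx = new SqlMirrorContext(PostgresDialect, Literal)
  import ctx._

  test("counting accounts by type produces a GROUP BY") {
    val q = quote {
      query[Account].groupBy(_.`type`).map { case (t, rows) => (t, rows.size) }
    }
    // the mirror context returns the generated statement instead of executing it
    assert(ctx.run(q).string.toLowerCase.contains("group by"))
  }
}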
Ecosystem
v.s.
• Event Sourcing (softwaremill/slick-eventsourcing)
• Data Migration (lastland/scala-forklift)
• Cats Integration (RMSone/slick-cats)
• Shapeless Integration (underscoreio/slickless)
• Blocking API (takezoe/blocking-slick)
• Cache (mslinn/quill-cache)
• Annotated Traits (nstojiljkovic/quill-trait)
General
• Quill Gen (mslinn/quill-gen)
• Annotated Traits (nstojiljkovic/quill-trait)
• “type all the things”-style Generator (olafurpg/scala-db-codegen)
• slick.codegen.SourceCodeGenerator
• Codegen Plugin for SBT (tototoshi/sbt-slick-codegen)
Code Creation / Generation
Ecosystem
Wins!
Bonus
DataFrame API
orders.as("o")
// Customer has a Location
.join(customers.as("c"), customers("id") === orders("customer"))
.join(destinations.as("d"), destinations("id") === customers("destination"))
// Supplier has a destination
.join(suppliers.as("s"), suppliers("id") === orders("supplier"))
.join(warehouses.as("w"), warehouses("supplier") === suppliers("id"))
.where(warehouses("address") === destinations("address"))
.select(
$"o.timePlaced",
$"c.firstName", $"c.LastName",
$"w.address",
$"d.address"
Spark 1.6.x Version
The Spark Dataset API brings the best of RDD and Data
Frames together, for type safety and user functions that run
directly on existing JVM types.
A Dataset is a strongly-typed, immutable collection
of objects that are mapped to a relational schema.
Dataset API
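A minimal sketch of what such a typed Dataset looks like in code (the case class, path and session settings are illustrative, not from the talk):

import org.apache.spark.sql.{Dataset, SparkSession}

case class Order(customer: Int, supplier: Int, timePlaced: java.sql.Timestamp)

val spark = SparkSession.builder.master("local[*]").getOrCreate()
import spark.implicits._

// a Dataset[Order] is checked against the Order case class at compile time
val orders: Dataset[Order] = spark.read.parquet("/data/orders").as[Order]

// map runs directly on the JVM type rather than on untyped Rows
val customerIds: Dataset[Int] = orders.map(_.customer)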
orders.as("o")
// Customer has a Location
.join(customers.as("c"), customers("id") === orders("customer"))
.join(destinations.as("d"), destinations("id") === customers("destination"))
// Supplier has a destination
.join(suppliers.as("s"), suppliers("id") === orders("supplier"))
.join(warehouses.as("w"), warehouses("supplier") === suppliers("id"))
.where(warehouses("address") === destinations("address"))
.select(
$"o.timePlaced",
$"c.firstName", $"c.LastName",
$"w.address",
$"d.address"
Back to the Trenches?
select
o.timePlaced,
c.firstName, c.LastName,
w.address,
d.address
from
orders o
join customers c on c.id = o.customer
join destinations d on d.id = c.destination
join suppliers s on s.id = o.supplier
join warehouses w on w.supplier = s.id
where
warehouses.address = destinations.address
Salvation Cometh... in the form of a QuillSparkContext
def sameAreaOrder = quote {
for {
o <- orders
c <- customers if (c.id === o.customer)
d <- destinations if (d.id === c.destination)
s <- suppliers if (s.id === o.supplier)
w <- warehouses if (w.supplier === s.id)
} yield (
o.timePlaced,
c.firstName, c.LastName,
w.address,
d.address
)
}
def sameAreaOrder = quote {
for {
o <- query[Orders]
c <- query[Customers] if (c.id === o.customer)
d <- query[Destinations] if (d.id === c.destination)
s <- query[Suppliers] if (s.id === o.supplier)
w <- query[Warehouses] if (w.supplier === s.id)
} yield (
o.timePlaced,
c.firstName, c.LastName,
w.address,
d.address
)
}
Salvation Cometh... in the form of a QuillSparkContext
Let’s Take a Step Back
Circa 2015
Future of SQL Likely Here
Every single one of these already
has a JDBC Driver!
SQL is Back!
Slick Quill
Usability
Reliability
Extension Friendliness
Streaming
Testing
Ecosystem
Bonus (Spark)
Thank You
Driving Behavioral Change for Information Management through Data-Driven Gree...Enterprise Knowledge
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdflior mazor
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Drew Madelung
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerThousandEyes
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...Martijn de Jong
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...Neo4j
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024The Digital Insurer
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationSafe Software
 
Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...apidays
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdfhans926745
 
AWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of TerraformAWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of TerraformAndrey Devyatkin
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfsudhanshuwaghmare1
 
HTML Injection Attacks: Impact and Mitigation Strategies
HTML Injection Attacks: Impact and Mitigation StrategiesHTML Injection Attacks: Impact and Mitigation Strategies
HTML Injection Attacks: Impact and Mitigation StrategiesBoston Institute of Analytics
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)Gabriella Davis
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Scriptwesley chun
 

Último (20)

Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUnderstanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
 
What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdf
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf
 
AWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of TerraformAWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of Terraform
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdf
 
HTML Injection Attacks: Impact and Mitigation Strategies
HTML Injection Attacks: Impact and Mitigation StrategiesHTML Injection Attacks: Impact and Mitigation Strategies
HTML Injection Attacks: Impact and Mitigation Strategies
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 

Quill vs Slick Smackdown

  • 1. Slick v.s. Quill Smackdown Alexander Ioffe
  • 3. • 8 yrs old, 38K Lines • Embedded DSL (EDSL) • Supported Databases: SQLServer, Oracle, DB2, MySQL, PostgreSQL, SQLite, Derby, HSQLDB, H2 • 2 yrs old, 34k lines • Quoted DSL (QDSL) • Supported Databases: SQLServer MySQL, PostgreSQL, SQLite, H2, Spark, Cassandra, OrientDB
  • 4. • Runtime Queries (default), optional API for Compile-Time • APIs for Async, Streaming, Sync (3rd party), and Effect-Type tracking. • Largest Contributors: Stefan Zeiger (170k), Christopher Vogt (11k), Hemant Kumar (1.7k) • Compile Time Queries (default), automatic fallback to Runtime • APIs for Sync, Async, and via Finagle. Only Streaming for Cassandra. • Largest Contributors: Flavio Brasil (79k), Mykhailo Osypov (5.4k), Juliano Alves (2.8k), Michael Ledin (2.7k), Gustavo Amigo (2.6k), jilen (2.4k), Subhobrata Dey (2k),
  • 5. Anatomy of a Slick Query TableQuery[Person].filter(_.age > 10) TableQuery[Person].filter(age => columnExtensionMethods(person.age).>(LiteralColumn[Int](10)) )((CanBeQueryCondition.BooleanColumnCanBeQueryCondition)) Filter( from: Table( PERSON, Path(NAME), Path(AGE)… ) where: Apply( Function > arg0: Path AGE arg1: LiteralNode(10) ) ) Expanded Implicits Slick AST select NAME, AGE… from PERSON where AGE > 10 Query Scala Compiler ~Scala Code JDBC Context Evaluator
  • 6. Anatomy of a Quill Query quote{ query[Person].filter(_.age > 10) } EntityQuery[Person]).filter(((x$1: Person) => x$1.age.>(10))) select NAME, AGE… from PERSON where AGE > 10 (Compile Time) Queries querySchema("Person").filter( x1 => x1.age > 10) .map(x => (x.id, x.name, x.age)) Scala Compiler Macro Engine Quasi Quote Parser Scala AST Quill AST
  • 7. Which one is Better? Slick Quill Usability Reliability Extension Friendliness Streaming Testing Ecosystem Bonus
  • 10. SELECT DISTINCT account.name, alias, CASE WHEN code = 'EV' THEN cast(account.number AS VARCHAR) ELSE cast(account.number AS VARCHAR) + substring(alias, 1, 2) END AS OFFICIAL_IDENTITY, CASE WHEN order_permission IN ('A', 'S') THEN 'ST' ELSE 'ENH' END FROM ( SELECT DISTINCT mc.alias, mc.code, mc.order_permission, mc.account_tag FROM MERCHANT_CLIENTS mc JOIN REGISTRY r ON r.alias = mc.alias WHERE r.market = 'us' AND r.record_type = 'M' UNION ALL SELECT DISTINCT sc.alias, 'EV' AS code, part.order_permission, sc.account_tag FROM SERVICE_CLIENTS sc JOIN REGISTRY r ON r.alias = sc.alias AND r.record_type = 'S' AND r.market = 'us' JOIN PARTNERSHIPS part ON part.id = sc.partnership_fk) client INNER JOIN ( dbo.ACCOUNTS account INNER JOIN ACCOUNT_TYPES accountType ON account.type = accountType.account_type LEFT JOIN DEDICATED_ACCOUNTS dedicated ON dedicated.account_number = account.number) ON (accountType.mapping_type = 0) OR (accountType.mapping_type = 2 AND account.tag = client.account_tag) OR (accountType.mapping_type = 1 AND dedicated.client_alias = client.alias)
  • 11.
  • 12. SELECT DISTINCT account.name, alias, CASE (...) AS OFFICIAL_IDENTITY, CASE (...) FROM ( SELECT DISTINCT mc.alias, mc.code, mc.order_permission, mc.account_tag (code, alias, perm, tag) FROM MERCHANT_CLIENTS mc JOIN REGISTRY r ON (alias) AND (otherConditions) UNION ALL SELECT DISTINCT sc.alias, 'EV' AS code, part.order_permission, sc.account_tag (code, alias, perm, tag) FROM SERVICE_CLIENTS sc JOIN REGISTRY r ON (alias) AND (otherConditions) JOIN PARTNERSHIPS part ON (id <-> fk)) client INNER JOIN ( dbo.ACCOUNTS account INNER JOIN ACCOUNT_TYPES accountType ON (account_type) LEFT JOIN DEDICATED_ACCOUNTS dedicated ON (account_number) ) ON (possibly anything...) OR (possibly the account tag...) OR (possibly the alias...) → All Depending on the accountType
  • 13. SELECT DISTINCT account.name, alias, CASE (...) AS OFFICIAL_IDENTITY, CASE (...) FROM ( SELECT DISTINCT mc.alias, mc.code, mc.order_permission, mc.account_tag (code, alias, perm, tag) FROM MERCHANT_CLIENTS mc JOIN REGISTRY r ON (alias) AND (otherConditions) UNION ALL SELECT DISTINCT sc.alias, 'EV' AS code, part.order_permission, sc.account_tag (code, alias, perm, tag) FROM SERVICE_CLIENTS sc JOIN REGISTRY r ON (alias) AND (otherConditions) JOIN PARTNERSHIPS part ON (id <-> fk)) client INNER JOIN ( dbo.ACCOUNTS account INNER JOIN ACCOUNT_TYPES accountType ON (account_type) LEFT JOIN DEDICATED_ACCOUNTS dedicated ON (account_number) ) ON (possibly anything...) OR (possibly the account tag...) OR (possibly the alias...) → All Depending on the accountType
  • 14. CREATE FUNCTION dbo.merchantClientsUdf (@market) RETURNS table as RETURN ( SELECT DISTINCT alias, code, order_permission, account_tag FROM MERCHANT_CLIENTS merchantClient JOIN REGISTRY entry ON entry.alias = merchantClient.alias WHERE entry.market = @market AND entry.record_type = 'M') CREATE VIEW CLIENT_ACCOUNTS AS SELECT DISTINCT account.name, alias, CASE WHEN code = 'EV' THEN cast(account.number AS VARCHAR) ELSE cast(account.number AS VARCHAR) + substring(alias, 1, 2) END AS OFFICIAL_IDENTITY, CASE WHEN order_permission IN ('A', 'S') THEN 'ST' ELSE 'ENH' END FROM (select * from merchantClientsUdf ('us') union select * from serviceClientsUdf ('us')) as client INNER JOIN ( dbo.ACCOUNTS account INNER JOIN ACCOUNT_TYPES accountType ON account.type = accountType.account_type LEFT JOIN DEDICATED_ACCOUNTS dedicated ON dedicated.account_number = account.number) ON (accountType.mapping_type = 0) OR (accountType.mapping_type = 2 AND account.tag = client.account_tag) OR (accountType.mapping_type = 1 AND dedicated.client_alias = client.alias) CREATE FUNCTION dbo.serviceClientsUdf (@market) RETURNS table as RETURN ( SELECT DISTINCT alias, code, order_permission, account_tag FROM SERVICE_CLIENTS serviceClient JOIN REGISTRY entry ON entry.alias = serviceClient.alias AND entry.record_type = 'S' AND entry.market = @market JOIN PARTNERSHIPS partnership ON partnership.id = serviceClient.partnership_fk)
  • 15. CREATE FUNCTION dbo.merchantClientsUdf (@market) RETURNS table as RETURN (…) CREATE VIEW EU_CLIENT_ACCOUNTS AS SELECT DISTINCT account.name, alias, CASE WHEN code = 'EV' THEN cast(account.number AS VARCHAR) ELSE cast(account.number AS VARCHAR) + substring(alias, 1, 2) END AS OFFICIAL_IDENTITY, CASE WHEN order_permission IN ('A', 'S') THEN 'ST' ELSE 'ENH' END FROM (select * from merchantClientsUdf ('eu') union select * from enhancedServiceClientsUdf ('eu')) as client INNER JOIN ( dbo.ACCOUNTS account INNER JOIN ACCOUNT_TYPES accountType ON account.type = accountType.account_type LEFT JOIN DEDICATED_ACCOUNTS dedicated ON dedicated.account_number = account.number) ON (accountType.mapping_type = 0) OR (accountType.mapping_type = 2 AND account.tag = client.account_tag) OR (accountType.mapping_type = 1 AND dedicated.client_alias = client.alias) CREATE FUNCTION dbo.enhancedServiceClientsUdf (@market) RETURNS table as RETURN ( SELECT DISTINCT alias, code, order_permission, account_tag FROM SERVICE_CLIENTS serviceClient JOIN REGISTRY entry ON (... entry.market = @market ...) JOIN PARTNERSHIPS partnership ON (…) JOIN PARTNERSHIP_CODES pc ON partnership.ID = pc.partnership_fk
  • 16. CREATE FUNCTION dbo.merchantClientsUdf (@market) RETURNS table as RETURN (…) CREATE VIEW CA_CLIENT_ACCOUNTS AS SELECT DISTINCT account.name, alias, CASE WHEN code = 'EV' THEN cast(account.number AS VARCHAR) ELSE cast(account.number AS VARCHAR) + substring(alias, 1, 2) END AS OFFICIAL_IDENTITY, CASE WHEN order_permission IN ('A', 'S') THEN 'ST' ELSE 'ENH' END FROM (select * from merchantClientsUdf ('ca')) as client INNER JOIN ( dbo.ACCOUNTS account INNER JOIN ACCOUNT_TYPES accountType ON account.type = accountType.account_type LEFT JOIN DEDICATED_ACCOUNTS dedicated ON dedicated.account_number = account.number) ON (accountType.mapping_type = 0) OR (accountType.mapping_type = 2 AND account.tag = client.account_tag) OR (accountType.mapping_type = 1 AND dedicated.client_alias = client.alias)
  • 17. ~O(N)/2 Codebase Size Per N Business Units! Tables: • merchantClientsUdf • serviceClientsUdf • CLIENT_ACCOUNTS • merchantClientsUdf • serviceClientsUdf • US_CLIENT_ACCOUNTS • enhancedServiceClientsUdf + • EU_CLIENT_ACCOUNTS + • CA_CLIENT_ACCOUNTS + Tables: = (Still) Lots of Technical Debt
  • 18.
  • 20. def merchantClientsUdf(market:String):Query[(String, String, Char, String)] = { for { mc <- merchantClients r <- registry if (r.alias === mc.alias && r.market === market && r.recordType === 'M') } yield (mc.alias, mc.code, mc.orderPermission, mc.accountTag) } SELECT DISTINCT mc.alias, mc.code, order_permission, mc.account_tag FROM MERCHANT_CLIENTS mc JOIN REGISTRY r ON r.alias = mc.alias WHERE r.market = 'us' AND r.record_type = 'M'
  • 21. def merchantClientsUdf(market:String):Query[(String, String, Char, String)] = { for { mc <- merchantClients r <- registry if (r.alias === mc.alias && r.market === market && r.recordType === 'M') } yield (mc.alias, mc.code, mc.orderPermission, mc.accountTag) } SELECT DISTINCT mc.alias, mc.code, order_permission, mc.account_tag FROM MERCHANT_CLIENTS mc JOIN REGISTRY r ON r.alias = mc.alias WHERE r.market = 'us' AND r.record_type = 'M' This Is a Lie
  • 22. def merchantClientsUdf(market:String): Query[(Rep[String],Rep[String],Rep[Char],Rep[String]), (String,String,Char, String),Seq]= { for { mc <- merchantClients r <- registry if (r.alias === mc.alias && r.market === market && r.recordType === 'M') } yield (mc.alias, mc.code, mc.orderPermission, mc.accountTag) } SELECT DISTINCT mc.alias, mc.code, order_permission, mc.account_tag FROM MERCHANT_CLIENTS mc JOIN REGISTRY r ON r.alias = mc.alias WHERE r.market = 'us' AND r.record_type = 'M' The Truth Is
  • 23. def merchantClientsUdf(market:String): Query[ClientLifted, Client,Seq] = { for { mc <- merchantClients r <- registry if (r.alias === mc.alias && r.market === market && r.recordType === 'M') } yield (mc.alias, mc.code, mc.orderPermission, mc.accountTag) } SELECT DISTINCT mc.alias, mc.code, order_permission, mc.account_tag FROM MERCHANT_CLIENTS mc JOIN REGISTRY r ON r.alias = mc.alias WHERE r.market = 'us' AND r.record_type = 'M' The Truth Is
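For completeness, a minimal sketch of how a Slick query definition like the one above is typically executed; the profile choice, the "db" config key, and the merchantClients/registry table definitions are assumptions for illustration, not part of the slides:

  import slick.jdbc.PostgresProfile.api._
  import scala.concurrent.Await
  import scala.concurrent.duration._

  // Assumed: a Database configured under the "db" key in application.conf, plus
  // TableQuery values merchantClients and registry matching the columns used above.
  val db = Database.forConfig("db")

  // .result turns the Query into a DBIO action; db.run executes it and returns a Future
  val clientsFuture = db.run(merchantClientsUdf("us").result)
  val clients = Await.result(clientsFuture, 10.seconds)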
  • 24. def merchantClientsUdf(market:String) = quote { for { mc <- merchantClients r <- registry if (r.alias == mc.alias && r.market == lift(market) && r.recordType == "M") } yield (mc.alias, mc.code, mc.orderPermission, mc.accountTag) } SELECT DISTINCT mc.alias, mc.code, order_permission, mc.account_tag FROM MERCHANT_CLIENTS mc JOIN REGISTRY r ON r.alias = mc.alias WHERE r.market = 'us' AND r.record_type = 'M' Quill Is Similar
  • 25. def merchantClientsUdf(market:String): Quoted[Query[(String,String,Char,String)]]= quote { for { mc <- merchantClients r <- registry if (r.alias == mc.alias && r.market == lift(market) && r.recordType == "M") } yield (mc.alias, mc.code, mc.orderPermission, mc.accountTag) } SELECT DISTINCT mc.alias, mc.code, order_permission, mc.account_tag FROM MERCHANT_CLIENTS mc JOIN REGISTRY r ON r.alias = mc.alias WHERE r.market = 'us' AND r.record_type = 'M' … but with sane type signatures
  • 26. def merchantClientsUdf(market:String): Quoted[Query[(String,String,Char,String)]]= quote { for { mc <- merchantClients r <- registry if (r.alias == mc.alias && r.market == lift(market) && r.recordType == "M") } yield (mc.alias, mc.code, mc.orderPermission, mc.accountTag) } SELECT DISTINCT mc.alias, mc.code, order_permission, mc.account_tag FROM MERCHANT_CLIENTS mc JOIN REGISTRY r ON r.alias = mc.alias WHERE r.market = 'us' AND r.record_type = 'M' … but with sane type signatures
  • 27. def merchantClientsUdf(market:String): Quoted[Query[Client]]= quote { for { mc <- merchantClients r <- registry if (r.alias == mc.alias && r.market == lift(market) && r.recordType == "M") } yield Client(mc.alias, mc.code, mc.orderPermission, mc.accountTag) } SELECT DISTINCT mc.alias, mc.code, order_permission, mc.account_tag FROM MERCHANT_CLIENTS mc JOIN REGISTRY r ON r.alias = mc.alias WHERE r.market = 'us' AND r.record_type = 'M' … with Case Classes it’s even better
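For comparison, a minimal sketch of executing the quoted query; the PostgresJdbcContext, its naming strategy, and the "ctx" config prefix are illustrative assumptions, and the quoted definitions above are assumed to be written against this context's imports:

  import io.getquill._

  // Assumed: a JDBC data source configured under the "ctx" prefix in application.conf
  lazy val ctx = new PostgresJdbcContext(SnakeCase, "ctx")
  import ctx._

  // run() compiles the quotation to SQL (at compile time where possible) and executes it
  val clients: List[Client] = ctx.run(merchantClientsUdf("us"))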
  • 28. def merchantClientsUdf(market:String): Quoted[Query[(Option[String], Option[String], Option[Char], Option[String])]] = quote { for { mc <- merchantClients r <- registry if (r.alias == mc.alias && r.market == lift(market) && r.recordType == "M") } yield (mc.alias, mc.code, mc.orderPermission, mc.accountTag) } It’s a bit touchy with Optionals
  • 29. … and for a reason SELECT DISTINCT mc.alias, mc.code, order_permission, mc.account_tag FROM MERCHANT_CLIENTS mc JOIN REGISTRY r ON r.alias = mc.alias
  • 30. select * from MERCHANT_CLIENTS where ACCOUNT_TAG = null select * from MERCHANT_CLIENTS where ACCOUNT_TAG is null Always False Can be True (null = null) = false Says:
  • 31. select * from MERCHANT_CLIENTS where ACCOUNT_TAG = null select * from MERCHANT_CLIENTS where ACCOUNT_TAG is null Always False Can be True (null = null) = false
  • 32. select * from MERCHANT_CLIENTS where ACCOUNT_TAG = null select * from MERCHANT_CLIENTS where ACCOUNT_TAG is null Always False Can be True Can be True (null = null) = true SET ANSI_NULLS OFF
  • 33. def merchantClientsUdf(market:String): Quoted[Query[(Option[String], Option[String], Option[Char], Option[String])]] = quote { for { mc <- merchantClients r <- registry if (r.alias.exists(rr => mc.alias.exists(_ == rr)) && r.market.exists(_ == lift(market)) && r.recordType.exists(_ == "M")) } yield (mc.alias, mc.code, mc.orderPermission, mc.accountTag) } This will solve the problem…
  • 34. ... or Make Some Operators implicit class OptionalExtensions[T](o:Option[T]) { def ===(p:Option[T]) = { quote { o.exists(oo =>p.exists(_ == oo)) } } def ~=~(p:Option[T]) = { quote { o.exists(oo =>p.exists(_ == oo)) } } def ~==(p:T) = { quote { o.exists(_ == p) } } } implicit class PlainExtensions[T](o:T) { def ==~(p:Option[T]) = { quote{ p.exists(_ == o) } } }
  • 35. def merchantClientsUdf(market:String): Quoted[Query[(Option[String], Option[String], Option[Char], Option[String])]] = quote { for { mc <- merchantClients r <- registry if (r.alias ~=~ mc.alias && r.market ~== lift(market) && r.recordType ~== "M") } yield (mc.alias, mc.code, mc.orderPermission, mc.accountTag) } … and it can be remedied
  • 37. def merchantClientsUdf(market:String):Query[ClientLifted, Client, Seq] = for { mc <- merchantClients r <- registry if (r.alias === mc.alias && r.market === market && r.recordType === 'M') } yield Client(mc.alias, mc.code, mc.orderPermission, mc.accountTag) def serviceClientsUdf(market:String):Query[ClientLifted, Client, Seq] = for { sc <- serviceClients r <- registry if (r.alias === sc.alias && r.market === market && r.recordType === 'S') part <- partnerships if (part.id === sc.partnershipFk) } yield Client(sc.alias, "EV".bind.?, part.orderPermission, sc.accountTag) def clients(market:String):Query[ClientLifted, Client, Seq] = merchantClientsUdf(market) ++ serviceClientsUdf(market)
  • 38. def merchantClientsUdf(market:String):Quoted[Query[Client]] = quote { for { mc <- merchantClients r <- registry if (r.alias ~=~ mc.alias) && (r.market ~=~ lift(market)) && (r.recordType ~== "M") } yield Client(mc.alias, mc.code, mc.orderPermission, mc.accountTag) } def serviceClientsUdf(market:String):Quoted[Query[Client]] = quote { for { sc <- serviceClients r <- registry if (r.alias ~=~ sc.alias) && (r.market ~== lift(market)) && (r.recordType ~== "S") part <- partnerships if (part.id == sc.partnershipFk) } yield Client(sc.alias, Some("EV"), part.orderPermission, sc.accountTag) } def clients(market:String):Quoted[Query[Client]] = quote { merchantClientsUdf(market) ++ serviceClientsUdf(market) }
  • 40. SELECT DISTINCT account.name, alias, CASE (...) AS OFFICIAL_IDENTITY, CASE (...) FROM (...) client INNER JOIN ( dbo.ACCOUNTS account INNER JOIN ACCOUNT_TYPES accountType ON (account_type) LEFT JOIN DEDICATED_ACCOUNTS dedicated ON (account_number) ) ON (accountType.mapping_type = 0) OR (possibly the account tag...) OR (possibly the alias...) name alias OFFICIAL_IDENTITY perm TUNV FNF 111 ENH TUNV ACME 111AC ENH SIADV FNF 456 ENH AUNV FNF 222 ENH AUNV ACME 222AC ENH ACMEINV ACME 808AC ENH YOGADV YOGL 123 ST TUNV YOGL 111 ST AUNV YOGL 222 ST
  • 41. SELECT DISTINCT account.name, alias, CASE (...) AS OFFICIAL_IDENTITY, CASE (...) FROM (...) client INNER JOIN ( dbo.ACCOUNTS account INNER JOIN ACCOUNT_TYPES accountType ON (account_type) LEFT JOIN DEDICATED_ACCOUNTS dedicated ON (account_number) ) ON (possibly anything...) OR (accountType.mapping_type = 2 AND account.tag = client.account_tag) OR (possibly the alias...) name alias OFFICIAL_IDENTITY perm TUNV FNF 111 ENH TUNV ACME 111AC ENH SIADV FNF 456 ENH AUNV FNF 222 ENH AUNV ACME 222AC ENH ACMEINV ACME 808AC ENH YOGADV YOGL 123 ST TUNV YOGL 111 ST AUNV YOGL 222 ST
  • 42. SELECT DISTINCT account.name, alias, CASE (...) AS OFFICIAL_IDENTITY, CASE (...) FROM (...) client INNER JOIN ( dbo.ACCOUNTS account INNER JOIN ACCOUNT_TYPES accountType ON (account_type) LEFT JOIN DEDICATED_ACCOUNTS dedicated ON (account_number) ) ON (possibly anything...) OR (possibly the account tag...) OR (accountType.mapping_type = 1 AND dedicated.client_alias = client.alias) name alias OFFICIAL_IDENTITY perm TUNV FNF 111 ENH TUNV ACME 111AC ENH SIADV FNF 456 ENH AUNV FNF 222 ENH AUNV ACME 222AC ENH ACMEINV ACME 808AC ENH YOGADV YOGL 123 ST TUNV YOGL 111 ST AUNV YOGL 222 ST
  • 43. def mappingConditionsMet( mappingType: Rep[Int], accountTag: Rep[String], clientTag: Rep[String], clientAlias: Rep[String], dedicatedAlias: Rep[String] ):Rep[Int] = Case.If(mappingType === 0).Then(1) .If(mappingType === 2 && accountTag === clientTag).Then(1) .If(mappingType === 1 && clientAlias === dedicatedAlias).Then(1) .Else(0) def accountMapping(clients:Query[Client]): Query[ (ClientsLifted,AccountsLifted,AccountTypesLifted,Option[Rep[DedicatedAccounts]]), (Clients,Accounts,AccountTypes,Option[DedicatedAccounts]), Seq ] = { for { (account, accountType, dedicatedAccount) <- allAccounts client <- clients if (mappingConditionsMet( accountType.mappingType.getOrElse(0), account.tag.getOrElse(""), client.accountTag.getOrElse(""), client.alias.getOrElse(""), dedicatedAccount.map(_.clientAlias).flatten.getOrElse("")) === 1) } yield (client, account, accountType, dedicatedAccount) }
  • 44. val mappingConditionsMet = quote { ( mappingType: Option[Int], accountTag: Option[String], clientTag: Option[String], clientAlias: Option[String], dedicatedAlias: Option[Option[String]] ) => if (mappingType == 0) 1 else if ((mappingType == 2) && (accountTag ==~ clientTag)) 1 else if ((mappingType == 1) && (dedicatedAlias.exists(_ ~=~ clientAlias))) 1 else 0 } def accountMapping(clients:Quoted[Query[Client]]): Quoted[Query[(Client, Accounts, AccountTypes, Option[DedicatedAccounts])]] = quote { for { (account, accountType, dedicatedAccount) <- allAccounts client <- clients if ( mappingConditionsMet( accountType.mappingType, account.tag, client.accountTag, client.otherAlias, dedicatedAccount.map(_.clientAlias) ) == 1) } yield (client, account, accountType, dedicatedAccount) }
  • 45. val mappingConditionsMet: Quoted[(Int, String, Option[String], Option[String], Option[Option[String]]) => Int] = quote { ( mappingType: Option[Int], accountTag: Option[String], clientTag: Option[String], clientAlias: Option[String], dedicatedAlias: Option[Option[String]] ) => if (mappingType == 0) 1 else if ((mappingType == 2) && (accountTag ==~ clientTag)) 1 else if ((mappingType == 1) && (dedicatedAlias.exists(_ ~=~ clientAlias))) 1 else 0 } def accountMapping(clients:Quoted[Query[Client]]): Quoted[Query[(Client, Accounts, AccountTypes, Option[DedicatedAccounts])]] = quote { for { (account, accountType, dedicatedAccount) <- allAccounts client <- clients if ( mappingConditionsMet( accountType.mappingType, account.tag, client.accountTag, client.otherAlias, dedicatedAccount.map(_.clientAlias) ) == 1) } yield (client, account, accountType, dedicatedAccount) }
  • 46. Actions DBIO.seq( TableQuery[Person] += Person("Joe", "Roe") ) DBIO.seq( TableQuery[Person] ++= Seq(Person("Joe", "Roe"), ...) ) quote { query[Person].insert(lift(Person("Joe", "Roe"))) } quote { liftQuery(List(Person("Joe", "Roe")), …) .foreach(e => query[Person].insert(e)) } Individual Bulk
  • 47. Actions Continued… DBIO.seq( TableQuery[Person].map(_.firstName) += ("Joe") ) quote { query[Person].insert(_.firstName -> lift("Joe")) } Insert Specific Columns
  • 48. Actions Continued… DBIO.seq( (TableQuery[Person].returning(person.id)) += Record(0, "1") ) DBIO.seq( (TableQuery[Person].returning(person.id)) ++= Seq(Person("Joe", "Roe"), ...) ) quote { query[Person].insert(lift(Record(0, "1"))).returning(_.id) } quote { liftQuery(List(Record(0, "1")), …) .foreach(e => query[Person].insert(e).returning(_.id)) } Inserting Returning Ids Individual Inserting Returning Ids Bulk
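A short sketch of how these actions are actually executed, assuming a Slick db and a synchronous Quill ctx configured as in the earlier sketches; the Person/Record shapes are the slide's own shorthand and are assumed to line up with the schema:

  // Slick: DBIO.seq discards the individual results, so db.run yields a Future[Unit]
  val slickInsert = db.run(DBIO.seq(TableQuery[Person] += Person("Joe", "Roe")))

  // Quill: on a synchronous JDBC context, run() of an insert with .returning
  // hands back the generated key directly
  val generatedId = ctx.run(quote {
    query[Person].insert(lift(Record(0, "1"))).returning(_.id)
  })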
  • 49. Show me the Queries
  • 50. SELECT s189.s137, s189.s138, s189.s139, s189.s140, s176."NAME", s176."TAG", s176."NUMBER", s176."TYPE", s177."ACCOUNT_TYPE", s177."MAPPING_TYPE", s47.s118, s47.s119, s47.s120 FROM "ACCOUNTS" s176 INNER JOIN "ACCOUNT_TYPES" s177 ON s176."TYPE" = s177."ACCOUNT_TYPE" LEFT OUTER JOIN ( SELECT 1 AS s118, "ACCOUNT_NUMBER" AS s119, "CLIENT_ALIAS" AS s120 FROM "DEDICATED_ACCOUNTS") s47 ON s176."NUMBER" = s47.s119 INNER JOIN ( SELECT s179."ALIAS" AS s137, ? AS s138, s183."ORDER_PERMISSION" AS s139, s179."ACCOUNT_TAG" AS s140 FROM "SERVICE_CLIENTS" s179, "REGISTRY" s180, "PARTNERSHIPS" s183 WHERE ((((CASE WHEN (s179."ALIAS" IS NULL) THEN ? ELSE cast(s179."ALIAS" AS VARCHAR(255)) END) = s180."ALIAS") AND (s180."RECORD_TYPE" = 'M')) AND (s180."MARKET" = 'us')) AND (s183."ID" = s179."PARTNERSHIP_FK") UNION ALL SELECT s185."ALIAS" AS s137, s185."CODE" AS s138, s185."ORDER_PERMISSION" AS s139, s185."ACCOUNT_TAG" AS s140 FROM "MERCHANT_CLIENTS" s185, "REGISTRY" s186 WHERE (((CASE WHEN (s185."ALIAS" IS NULL) THEN ? ELSE cast(s185."ALIAS" AS VARCHAR(255)) END) = s186."ALIAS") AND (s186."RECORD_TYPE" = 'M')) AND (s186."MARKET" = 'us')) s189 ON (CASE WHEN ((CASE WHEN (s177."MAPPING_TYPE" IS NULL) THEN ? ELSE cast(s177."MAPPING_TYPE" AS INTEGER) END) = 0) THEN 1 WHEN (((CASE WHEN (s177."MAPPING_TYPE" IS NULL) THEN ? ELSE cast(s177."MAPPING_TYPE" AS INTEGER) END) = 2) AND ((CASE WHEN (s176."TAG" IS NULL) THEN ? ELSE cast(s176."TAG" AS VARCHAR(255)) END) = (CASE WHEN (s189.s140 IS NULL) THEN ? ELSE cast(s189.s140 AS VARCHAR(255)) END))) THEN 1 WHEN (((CASE WHEN (s177."MAPPING_TYPE" IS NULL) THEN ? ELSE cast(s177."MAPPING_TYPE" AS INTEGER) END) = 1) AND ((CASE WHEN (s189.s137 IS NULL) THEN ? ELSE cast(s189.s137 AS VARCHAR(255)) END) = (CASE WHEN ((CASE WHEN (s47.s118 IS NOT NULL) THEN s47.s120 ELSE NULL END) IS NULL) THEN ? ELSE cast((CASE WHEN (s47.s118 IS NOT NULL) THEN s47.s120 ELSE NULL END) AS VARCHAR(255)) END))) THEN 1 ELSE 0 END) = 1
  • 51. SELECT s189.s137, s189.s138, s189.s139, s189.s140, s176."NAME", s176."TAG", s176."NUMBER", s176."TYPE", s177."ACCOUNT_TYPE", s177."MAPPING_TYPE", s47.s118, s47.s119, s47.s120 FROM "ACCOUNTS" s176 INNER JOIN "ACCOUNT_TYPES" s177 ON s176."TYPE" = s177."ACCOUNT_TYPE" LEFT OUTER JOIN ( SELECT 1 AS s118, "ACCOUNT_NUMBER" AS s119, "CLIENT_ALIAS" AS s120 FROM "DEDICATED_ACCOUNTS") s47 ON s176."NUMBER" = s47.s119 INNER JOIN ( SELECT s179."ALIAS" AS s137, ? AS s138, s183."ORDER_PERMISSION" AS s139, s179."ACCOUNT_TAG" AS s140 FROM "SERVICE_CLIENTS" s179, "REGISTRY" s180, "PARTNERSHIPS" s183 WHERE ((((CASE WHEN (s179."ALIAS" IS NULL) THEN ? ELSE cast(s179."ALIAS" AS VARCHAR(255)) END) = s180."ALIAS") AND (s180."RECORD_TYPE" = 'M')) AND (s180."MARKET" = 'us')) AND (s183."ID" = s179."PARTNERSHIP_FK") UNION ALL SELECT s185."ALIAS" AS s137, s185."CODE" AS s138, s185."ORDER_PERMISSION" AS s139, s185."ACCOUNT_TAG" AS s140 FROM "MERCHANT_CLIENTS" s185, "REGISTRY" s186 WHERE (((CASE WHEN (s185."ALIAS" IS NULL) THEN ? ELSE cast(s185."ALIAS" AS VARCHAR(255)) END) = s186."ALIAS") AND (s186."RECORD_TYPE" = 'M')) AND (s186."MARKET" = 'us')) s189 ON (CASE WHEN ((CASE WHEN (s177."MAPPING_TYPE" IS NULL) THEN ? ELSE cast(s177."MAPPING_TYPE" AS INTEGER) END) = 0) THEN 1 WHEN (((CASE WHEN (s177."MAPPING_TYPE" IS NULL) THEN ? ELSE cast(s177."MAPPING_TYPE" AS INTEGER) END) = 2) AND ((CASE WHEN (s176."TAG" IS NULL) THEN ? ELSE cast(s176."TAG" AS VARCHAR(255)) END) = (CASE WHEN (s189.s140 IS NULL) THEN ? ELSE cast(s189.s140 AS VARCHAR(255)) END))) THEN 1 WHEN (((CASE WHEN (s177."MAPPING_TYPE" IS NULL) THEN ? ELSE cast(s177."MAPPING_TYPE" AS INTEGER) END) = 1) AND ((CASE WHEN (s189.s137 IS NULL) THEN ? ELSE cast(s189.s137 AS VARCHAR(255)) END) = (CASE WHEN ((CASE WHEN (s47.s118 IS NOT NULL) THEN s47.s120 ELSE NULL END) IS NULL) THEN ? ELSE cast((CASE WHEN (s47.s118 IS NOT NULL) THEN s47.s120 ELSE NULL END) AS VARCHAR(255)) END))) THEN 1 ELSE 0 END) = 1
  • 52. SELECT client.other_alias, client.code, client.order_permission, client.account_tag, account_type.name, account_type.tag, account_type.number, account_type.type, account_type.account_type, account_type.mapping_type, x11.account_number, x11.client_alias FROM (SELECT account.type type, account.name name, account.number number, account.tag tag, account_type.account_type account_type, account_type.mapping_type mapping_type FROM accounts account, account_types account_type WHERE account.type = account_type.account_type) account_type LEFT JOIN dedicated_accounts x11 ON x11.account_number = account_type.number, ( (SELECT sc.account_tag account_tag, sc.alias other_alias, ? code, part.order_permission order_permission FROM service_clients sc, registry r, partnerships part WHERE sc.alias = r.alias AND r.market = ? AND r.record_type = 'S' AND sc.partnership_fk = part.id) UNION ALL (SELECT mc.account_tag account_tag, mc.alias other_alias, mc.code code, mc.order_permission order_permission FROM merchant_clients mc, registry r1 WHERE mc.alias = r1.alias AND r1.market = ? AND r1.record_type = 'M') ) client WHERE CASE WHEN CASE WHEN account_type.mapping_type IS NOT NULL THEN ? ELSE 0 END = 0 THEN 1 WHEN CASE WHEN account_type.mapping_type IS NOT NULL THEN ? ELSE 0 END = 2 AND client.account_tag = CASE WHEN account_type.tag IS NOT NULL THEN ? ELSE '' END THEN 1 WHEN CASE WHEN account_type.mapping_type IS NOT NULL THEN ? ELSE 0 END = 1 AND client.other_alias = x11.client_alias THEN 1 ELSE 0 END = 1
  • 53. SELECT client.other_alias, client.code, client.order_permission, client.account_tag, account_type.name, account_type.tag, account_type.number, account_type.type, account_type.account_type, account_type.mapping_type, x11.account_number, x11.client_alias FROM (SELECT account.type type, account.name name, account.number number, account.tag tag, account_type.account_type account_type, account_type.mapping_type mapping_type FROM accounts account, account_types account_type WHERE account.type = account_type.account_type) account_type LEFT JOIN dedicated_accounts x11 ON x11.account_number = account_type.number, ( (SELECT sc.account_tag account_tag, sc.alias other_alias, ? code, part.order_permission order_permission FROM service_clients sc, registry r, partnerships part WHERE sc.alias = r.alias AND r.market = ? AND r.record_type = 'S' AND sc.partnership_fk = part.id) UNION ALL (SELECT mc.account_tag account_tag, mc.alias other_alias, mc.code code, mc.order_permission order_permission FROM merchant_clients mc, registry r1 WHERE mc.alias = r1.alias AND r1.market = ? AND r1.record_type = 'M') ) client WHERE CASE WHEN CASE WHEN account_type.mapping_type IS NOT NULL THEN ? ELSE 0 END = 0 THEN 1 WHEN CASE WHEN account_type.mapping_type IS NOT NULL THEN ? ELSE 0 END = 2 AND client.account_tag = CASE WHEN account_type.tag IS NOT NULL THEN ? ELSE '' END THEN 1 WHEN CASE WHEN account_type.mapping_type IS NOT NULL THEN ? ELSE 0 END = 1 AND client.other_alias = x11.client_alias THEN 1 ELSE 0 END = 1
  • 55. Usability Wins! Obvious Question: Why not just get rid of U
  • 57. Reliability – What are we measuring?
  • 58. Reliability – What are we measuring? Performance Under Load?
  • 59. Reliability – What are we measuring? Performance Under Load? Percentage Code Tested?
  • 60. Reliability – What are we measuring? Performance Under Load? Percentage Code Tested?
  • 61. Reliability – What are we measuring? Performance Under Load? Percentage Code Tested? Generating Queries Correctly
  • 62.
  • 63.
  • 64.
  • 65. @Entity public class A { @OneToMany @JoinColumn(name = "fk") private List<B> bs = new ArrayList<B>(); } for (B b : a.bs) { doSomethingWith(b); }
  • 66. @Entity public class A { @OneToMany @JoinColumn(name = "fk") private List<B> bs = new ArrayList<B>(); } for (B b : a.bs) { doSomethingWith(b); }
  • 67.
  • 68.
  • 72.
  • 73.
  • 74.
  • 75.
  • 76.
  • 77.
  • 78.
  • 79. [10.11.2015 15:27:21.088] [ERROR] [application-akka.actor.default-dispatcher-12] [application] Error 'internalError - Cannot convert node to SQLComprehension | GroupBy t9 : Vector[(t9<String'>, Vector[t17<{s18: String'}>])] | from s8: Bind :Vector[t17<{s18: String'}>] | from s13: Table test_query :Vector[@t11<{id: String'}>] | select: Pure t17 :Vector[t17<{s18: String'}>] | value:StructNode : {s18: String'} | s18: Path s13.id : String' | by: Path s8.s18 : String' 'occured.
  • 80. [10.11.2015 15:27:21.088] [ERROR] [application-akka.actor.default-dispatcher-12] [application] Error 'internalError - Cannot convert node to SQLComprehension | GroupBy t9 : Vector[(t9<String'>, Vector[t17<{s18: String'}>])] | from s8: Bind :Vector[t17<{s18: String'}>] | from s13: Table test_query :Vector[@t11<{id: String'}>] | select: Pure t17 :Vector[t17<{s18: String'}>] | value:StructNode : {s18: String'} | s18: Path s13.id : String' | by: Path s8.s18 : String' 'occured.
  • 81. [10.11.2015 15:27:21.088] [ERROR] [application-akka.actor.default-dispatcher-12] [application] Error 'internalError - Cannot convert node to SQLComprehension | GroupBy t9 : Vector[(t9<String'>, Vector[t17<{s18: String'}>])] | from s8: Bind :Vector[t17<{s18: String'}>] | from s13: Table test_query :Vector[@t11<{id: String'}>] | select: Pure t17 :Vector[t17<{s18: String'}>] | value:StructNode : {s18: String'} | s18: Path s13.id : String' | by: Path s8.s18 : String' 'occured. table.groupBy(_.id).map { case (c, tbl) => (c, tbl.length) }
  • 82. [10.11.2015 15:27:21.088] [ERROR] [application-akka.actor.default-dispatcher-12] [application] Error 'internalError - Cannot convert node to SQLComprehension | GroupBy t9 : Vector[(t9<String'>, Vector[t17<{s18: String'}>])] | from s8: Bind :Vector[t17<{s18: String'}>] | from s13: Table test_query :Vector[@t11<{id: String'}>] | select: Pure t17 :Vector[t17<{s18: String'}>] | value:StructNode : {s18: String'} | s18: Path s13.id : String' | by: Path s8.s18 : String' 'occured. table.groupBy(_.id).map { case (c, tbl) => (c, tbl.length) }
  • 83. [10.11.2015 15:27:21.088] [ERROR] [application-akka.actor.default-dispatcher-12] [application] Error 'internalError - Cannot convert node to SQLComprehension | GroupBy t9 : Vector[(t9<String'>, Vector[t17<{s18: String'}>])] | from s8: Bind :Vector[t17<{s18: String'}>] | from s13: Table test_query :Vector[@t11<{id: String'}>] | select: Pure t17 :Vector[t17<{s18: String'}>] | value:StructNode : {s18: String'} | s18: Path s13.id : String' | by: Path s8.s18 : String' 'occured. table.groupBy(_.id).map { case (c, tbl) => (c, tbl.length) }
  • 84. Table .groupBy(_.id) .map { case (c, tbl) => (c, tbl.length) }
  • 85. Table .groupBy(_.id) .map { case (c, tbl) => (c, tbl.length) }
  • 86. Table .drop(0) .groupBy(_.id) .map { case (c, tbl) => (c, tbl.length) }
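Spelled out, the .drop(0) trick from the slide looks roughly like the sketch below; the no-op drop changes the query tree in a way that appears to nudge Slick's query compiler into a shape it can convert. `table` and `db` are assumed from the earlier sketches:

  // Assumed: `table` is a TableQuery whose rows have an `id` column
  val counts = table
    .drop(0)                                      // no-op row-wise, but changes the query tree
    .groupBy(_.id)
    .map { case (id, rows) => (id, rows.length) }

  val result = db.run(counts.result)              // Future[Seq[(IdType, Int)]]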
  • 87. … and the solution is:
  • 88.
  • 89.
  • 92. $ git log -p slick/src/main/scala/slick/compiler | awk ... | sort -rn Total Adds Deletes 2933 5166 2233 Stefan Zeiger 50 50 0 Alexander Ioffe 14 35 21 deusaquilus 7 11 4 Sue 1 12 11 Iulian Dogariu 1 7 6 Ashutosh Agarwal 0 4 4 Mateusz Kołodziejczyk -16 5 21 Ólafur Páll Geirsson Who Understands slick/src/main/scala/slick/compiler ???
  • 93. $ git log -p slick/src/main/scala/slick/compiler | awk ... | sort -rn Who Understands slick/src/main/scala/slick/compiler ??? Total Adds Deletes 97.1% 97.7% 97.1% Stefan Zeiger 1.7% 0.9% 0.0% Alexander Ioffe 0.5% 0.7% 0.9% deusaquilus 0.2% 0.2% 0.2% Sue 0.0% 0.2% 0.5% Iulian Dogariu 0.0% 0.1% 0.3% Ashutosh Agarwal 0.0% 0.1% 0.2% Mateusz Kołodziejczyk 0.5% 0.1% 0.9% Ólafur Páll Geirsson
  • 94. $ cat file.txt | awk '/./ && !author { author = $0; next } author { ins[author] += $1; del[author] += $2 } /^$/ { author = ""; next } END { for (a in ins) { printf "%10d %10d %10d %s\n", ins[a] - del[a], ins[a], del[a], a } }' | sort -rn Total Adds Deletes 2933 5166 2233 Stefan Zeiger 50 50 0 Alexander Ioffe 14 35 21 deusaquilus 7 11 4 Sue 1 12 11 Iulian Dogariu 1 7 6 Ashutosh Agarwal 0 4 4 Mateusz Kołodziejczyk -16 5 21 Ólafur Páll Geirsson Who Understands slick/src/main/scala/slick/compiler ???
  • 95. $ cat file.txt | awk '/./ && !author { author = $0; next } author { ins[author] += $1; del[author] += $2 } /^$/ { author = ""; next } END { for (a in ins) { printf "%10d %10d %10d %s\n", ins[a] - del[a], ins[a], del[a], a } }' | sort -rn Total Adds Deletes 2933 5166 2233 Stefan Zeiger 50 50 0 Alexander Ioffe 14 35 21 deusaquilus 7 11 4 Sue 1 12 11 Iulian Dogariu 1 7 6 Ashutosh Agarwal 0 4 4 Mateusz Kołodziejczyk -16 5 21 Ólafur Páll Geirsson Who Understands slick/src/main/scala/slick/compiler ???
  • 97.
  • 98.
  • 99. We require that each query in the host language generate exactly one SQL query. Alluding to twin perils Odysseus sought to skirt when navigating the straits of Medina, we seek to avoid Scylla and Charybdis. Scylla stands for the case where the system fails to generate a query, signalling an error. Charybdis stands for the case where the system generates multiple queries, hindering efficiency. The overhead of accessing a database is high, and to a first approximation cost is proportional to the number of queries. We particularly want to avoid a query avalanche, in the sense of Grust et al. (2010), where a single host query generates a number of SQL queries proportional to the size of the data
  • 100. Our work avoids these perils. For T-LINQ, we prove the Scylla and Charybdis theorem, characterising when a host query is guaranteed to generate a single SQL query. All our examples are easily seen to satisfy the characterisation in the theorem, and indeed our theory yields the same SQL query for each that one would write by hand. For P-LINQ, we verify that its run time on our examples is comparable to that of F# 2.0 and F# 3.0, in the cases where those systems generate a query, and significantly faster in the one case where F# 3.0 generates an avalanche—indeed, arbitrarily faster as the size of the data grows.
  • 101. Our work avoids these perils. For T-LINQ, we prove the Scylla and Charybdis theorem, characterising when a host query is guaranteed to generate a single SQL query. All our examples are easily seen to satisfy the characterisation in the theorem, and indeed our theory yields the same SQL query for each that one would write by hand. For P-LINQ, we verify that its run time on our examples is comparable to that of F# 2.0 and F# 3.0, in the cases where those systems generate a query, and significantly faster in the one case where F# 3.0 generates an avalanche—indeed, arbitrarily faster as the size of the data grows.
  • 102.
  • 103.
  • 104.
  • 106.
  • 107.
  • 108.
  • 109. It goes without saying… .length after groupBy + map works object TestQuery { val q = quote { query[TestQuery].groupBy(_.id).map { case (c, tbl) => (c, tbl.size) } } def runQuery = run(q) }
  • 110.
  • 111. Reliability Wins! Obvious Question: Why not just rewrite Slick’s query compiler using Wadler’s Rules
  • 114. def countBy[E, U, K, T](query:Query[E, U, Seq])(predicate:E=>K)( implicit kshape: Shape[_ <: FlatShapeLevel, K, T, K], vshape: Shape[_ <: FlatShapeLevel, E, _, E]): Query[(K, Rep[Int]), (T, Int), scala.Seq] = { query.groupBy(predicate).map { case(field, records) => (field, records.length) } } val q = countBy(accounts)(_.`type`) def conditionalTake[E, U, T <% Ordered]( query:Query[E, U, Seq], numRecords:Option[Int]): Query[E, U, Seq] = { numRecords match { case Some(number) => query.take(number) case None => query } } val q = conditionalTake(accounts, Some(10)) type count ADVERTISING 4 INVENTORY 3 TAX 1 http://host/query?nr=100 NAME TAG NUMBER TYPE TUNV NULL 111 TAX YOGADV YOG 123 ADVERTISING SIADV SID 456 ADVERTISING SIADVA SIDA 457 ADVERTISING UMBINV NULL 707 INVENTORY FFADV FF 789 ADVERTISING ACMEINV NULL 808 INVENTORY YOGINV NULL 909 INVENTORY
  • 115. def countBy[E, U, K, T](query:Query[E, U, Seq])(predicate:E=>K)( implicit kshape: Shape[_ <: FlatShapeLevel, K, T, K], vshape: Shape[_ <: FlatShapeLevel, E, _, E]): Query[(K, Rep[Int]), (T, Int), scala.Seq] = { query.groupBy(predicate).map { case(field, records) => (field, records.length) } } val q = countBy(accounts)(_.`type`) def conditionalTake[E, U, T <% Ordered]( query:Query[E, U, Seq], numRecords:Option[Int]): Query[E, U, Seq] = { numRecords match { case Some(number) => query.take(number) case None => query } } val q = conditionalTake(accounts, Some(10)) http://host/query?nr=100 NAME TAG NUMBER TYPE TUNV NULL 111 TAX YOGADV YOG 123 ADVERTISING SIADV SID 456 ADVERTISING SIADVA SIDA 457 ADVERTISING UMBINV NULL 707 INVENTORY FFADV FF 789 ADVERTISING ACMEINV NULL 808 INVENTORY YOGINV NULL 909 INVENTORY type count ADVERTISING 4 INVENTORY 3 TAX 1
  • 116. def conditionalTake[E](query:Quoted[Query[E]], take:Option[Int]) = take match { case Some(num) => quote { query.take(lift(num)) } case None => quote { query } } val q = conditionalTake(accounts, Some(10)) def countBy[E, K] = quote { (query:Query[E]) => (predicate: E=>K) => query.groupBy(e => predicate(e)).map { case (field, records) => (field, records.size) } } val q = quote { countBy(accounts)(_.`type`) }
  • 117. def conditionalTake[E](query:Quoted[Query[E]], take:Option[Int]) = take match { case Some(num) => quote { query.take(lift(num)) } case None => quote { query } } val q = conditionalTake(accounts, Some(10)) def countBy[E, K] = quote { (query:Query[E]) => (predicate: E=>K) => query.groupBy(e => predicate(e)).map { case (field, records) => (field, records.size) } } val q = quote { countBy(accounts)(_.`type`) } Also takes Quoted[Query[E]] ... but IDEs don’t always understand that
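Because conditionalTake is an ordinary Scala function that returns a Quoted value, the branching on the Option happens at runtime, and the resulting quotation is typically built as a dynamic query (Quill's runtime fallback). A sketch of the call site, with accounts and ctx assumed from earlier slides:

  // take ten rows when a limit is supplied, everything otherwise
  val firstTen   = conditionalTake(accounts, Some(10))
  val everything = conditionalTake(accounts, None)

  ctx.run(firstTen)    // generates a limited SELECT, with 10 bound as a parameter via lift
  ctx.run(everything)  // same SELECT with no limit clause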
  • 118. Extension Friendliness Custom Outputs (from yields, maps, etc...) v.s. Queries
  • 119. trait CustomerDef { this: ProfileComponent => import profile.api._ case class Client(alias: Rep[Option[String]], code: Rep[Option[String]], permission: Rep[Option[Char]], tag: Rep[Option[String]]) case class ClientRow(alias: Option[String], code: Option[String], permission: Option[Char], tag: Option[String]) implicit object ClientRecordShape extends CaseClassShape[ Product, (Rep[Option[String]], Rep[Option[String]], Rep[Option[Char]], Rep[Option[String]]), Client, (Option[String], Option[String], Option[Char], Option[String]), ClientRow](Client.tupled, ClientRow.tupled) } Client (alias: Option[String], code: Option[String], permission: Option[String], tag: Option[String])
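With the CaseClassShape in scope, a query can yield the lifted Client directly and Slick materializes plain ClientRow values. A minimal usage sketch, assuming these definitions are mixed into the profile component and that merchantClients is a TableQuery whose optional columns line up with the Client fields:

  val clientQuery = merchantClients.map(mc =>
    Client(mc.alias, mc.code, mc.orderPermission, mc.accountTag))

  // db.run returns Future[Seq[ClientRow]] thanks to the implicit shape above
  val rows = db.run(clientQuery.result)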
  • 120. Structural Type-Based Extensions case class ShippingOrder(customerId:Int, startedAt:LocalDateTime, endedAt:LocalDateTime) case class ProcessRequest(regionId:Int, startedAt:LocalDateTime, endedAt:LocalDateTime) case class Event(typeCode:String, startedAt:LocalDateTime, endedAt:LocalDateTime)
  • 121. Structural Type-Based Extensions case class ShippingOrder(customerId:Int, startedAt:LocalDateTime, endedAt:LocalDateTime) case class ProcessRequest(regionId:Int, startedAt:LocalDateTime, endedAt:LocalDateTime) case class Event(typeCode:String, startedAt:LocalDateTime, endedAt:LocalDateTime) implicit class TemporalObjectExtensions[T <: { def startedAt: LocalDateTime; def endedAt: LocalDateTime }]( records:Query[T]) { def existedAt(date: LocalDateTime) = quote { records.filter(r => (lift(r.startedAt) < date) && (lift(r.endedAt) > date)) } } val q = quote { query[ShippingOrder].existedAt(lift(now)) } val q = quote { query[ProcessRequest].existedAt(lift(now)) } val q = quote { query[Event].existedAt(lift(now)) }
  • 122. No Infix Date Ops? case class ShippingOrder(customerId:Int, startedAt:LocalDateTime, endedAt:LocalDateTime) case class ProcessRequest(regionId:Int, startedAt:LocalDateTime, endedAt:LocalDateTime) case class Event(typeCode:String, startedAt:LocalDateTime, endedAt:LocalDateTime) implicit class TemporalObjectExtensions[T <: { def startedAt: LocalDateTime; def endedAt: LocalDateTime }]( records:Query[T]) { def existedAt(date: LocalDateTime) = quote { records.filter(r => (lift(r.startedAt) < date) && (lift(r.endedAt) > date)) } } val q = quote { query[ShippingOrder].existedAt(lift(now)) } val q = quote { query[ProcessRequest].existedAt(lift(now)) } val q = quote { query[Event].existedAt(lift(now)) }
  • 123. No Problem! implicit class LocalDateOps(left: LocalDateTime) { def > (right: LocalDateTime) = quote(infix"$left > $right".as[Boolean]) def >= (right: LocalDateTime) = quote(infix"$left >= $right".as[Boolean]) def < (right: LocalDateTime) = quote(infix"$left < $right".as[Boolean]) def <= (right: LocalDateTime) = quote(infix"$left <= $right".as[Boolean]) }
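With those infix operators in scope, LocalDateTime comparisons can be used directly inside quotations. A small usage sketch, where `now` is a hypothetical runtime value lifted into the query:

  val now = java.time.LocalDateTime.now()

  val activeEvents = quote {
    query[Event].filter(e => e.startedAt < lift(now) && e.endedAt > lift(now))
  }
  // ctx.run(activeEvents) yields the events whose interval contains `now`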
  • 124. Extension Friendliness Wins! Obvious Question: (Again) Why not just get rid of U
  • 126. Our Query Comes out… def doPublish(ds:DataSource) = { val pub: DatabasePublisher[_ <: Product] = ds.stream(TableQuery[Accounts].result) .mapResult(r => serializeToJson(r)) val akkaSource = Source.fromPublisher(pub) .map(r => ChunkStreamPart(r)) HttpResponse(entity = HttpEntity.Chunked(format.contentType, akkaSource) ) }
  • 127. … of Here! val route = get { pathPrefix("path" / "from" / "host") { pathEndOrSingleSlash { ctx => ctx.complete { doPublish(ds) } } } }
  • 128. … of Here! val route = get { pathPrefix("path" / "from" / "host") { pathEndOrSingleSlash { ctx => ctx.complete { doPublish(ds) } } } }
  • 129. Streaming Wins! Obvious Question: Write Akka Extensions for Quill … or just move the Monix-based API to Quill-Jdbc from Quill-Cassandra?
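A hedged sketch of what the Quill side could look like with the Cassandra streaming context, whose run returns a Monix Observable; the context construction, the "cassandra" config prefix, and the bridge into Akka via Reactive Streams are assumptions here, not something shown in the talk:

  import akka.stream.scaladsl.Source
  import io.getquill._
  import monix.execution.Scheduler.Implicits.global

  // Assumed: a Cassandra keyspace configured under the "cassandra" prefix
  lazy val ctx = new CassandraStreamContext(SnakeCase, "cassandra")
  import ctx._

  val people     = quote { query[Person].filter(_.age > 10) }
  val observable = ctx.run(people)                              // monix.reactive.Observable[Person]
  val source     = Source.fromPublisher(observable.toReactivePublisher)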
  • 131. What if we can test Queries… class MyMemoryDriver extends ModifiedMemoryProfile { } class MyHeapDriver extends RelationalTestDB { type Driver = MemoryDriver val driver: Driver = new MyMemoryDriver }
  • 132. Without a Database! class AccountClientOrderMerchantSupplierTest extends FunSuite with BeforeAndAfter { val heapDriver = new MyHeapDriver val profile = heapDriver.driver.profile import profile.api._ before { initializeEntireSchema() } test("Create Accounts, Clients, Orders, Merchants, Suppliers and Test") { db.run(for { _ <- accounts ++= Seq(Account(...), ...) _ <- clients ++= Seq(Client(...), ...) _ <- orders ++= Seq(Order(...), ...) _ <- merchants ++= Seq(Merchant(...), ...) _ <- suppliers ++= Seq(Supplier(...), ...) q <- giantQueryThatCombinesEverything() _ = { assertRealityIsInLineWithExpectations(accounts, clients, orders, merchants, suppliers) } } yield ()) }
  • 133. Without a Database! • Create Schema (~100 `Tables`) • Initialize Schema with Dozens of Records • Run Integration Tests with Real Production Data • 200 ~ 400ms Per Test
  • 134. Testing Wins! Obvious Question: Why not write a Memory Driver for the Quill AST, take some notes from slick.memory.QueryInterpreter?
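A partial step in that direction already exists: Quill's mirror context "runs" a quotation by returning the generated SQL instead of touching a database, which covers the query-generation half of such tests (though not in-memory evaluation of the data). A minimal sketch, with illustrative dialect/naming choices:

  import io.getquill._

  case class TestQuery(id: String)

  lazy val mirror = new SqlMirrorContext(MirrorSqlDialect, Literal)
  import mirror._

  val q = quote { query[TestQuery].groupBy(_.id).map { case (c, tbl) => (c, tbl.size) } }

  // .string on the mirror result is the SQL Quill would send to the driver
  println(mirror.run(q).string)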
  • 136. • Event Sourcing (softwaremill/slick-eventsourcing) • Data Migration (lastland/scala-forklift) • Cats Integration (RMSone/slick-cats) • Shapeless Integration (underscoreio/slickless) • Blocking API (takezoe/blocking-slick) • Cache (mslinn/quill-cache) • Annotated Traits (nstojiljkovic/quill-trait) General
  • 137. • Quill Gen (mslinn/quill-gen) • “type all the things”-style Generator (olafurpg/scala-db-codegen) • slick.codegen.SourceCodeGenerator • Codegen Plugin for SBT (tototoshi/sbt-slick-codegen) Code Generation
  • 138. • Quill Gen (mslinn/quill-gen) • Annotated Traits (nstojiljkovic/quill-trait) • “type all the things”-style Generator (olafurpg/scala-db-codegen) • slick.codegen.SourceCodeGenerator • Codegen Plugin for SBT (tototoshi/sbt-slick-codegen) Code Creation / Generation
  • 140. Bonus
  • 141. DataFrame API orders.as("o") // Customer has a Location .join(customers.as("c"), customers("id") === orders("customer")) .join(destinations.as("d"), destinations("id") === customers("destination")) // Supplier has a destination .join(suppliers.as("s"), suppliers("id") === orders("supplier")) .join(warehouses.as("w"), warehouses("supplier") === suppliers("id")) .where(warehouses("address") === destinations("address")) .select( $"o.timePlaced", $"c.firstName", $"c.LastName", $"w.address", $"d.address"
  • 142. Spark 1.6.x Version: The Spark Dataset API brings the best of RDDs and DataFrames together, for type safety and user functions that run directly on existing JVM types. A Dataset is a strongly-typed, immutable collection of objects that are mapped to a relational schema.
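As a quick illustration of that claim (an assumption-laden sketch using the Spark 2.x SparkSession API rather than 1.6's SQLContext; case classes and paths are hypothetical), .as[T] is what turns an untyped DataFrame into a typed Dataset:

    import org.apache.spark.sql.{Dataset, SparkSession}

    case class Order(id: Int, customer: Int, supplier: Int, timePlaced: java.sql.Timestamp)
    case class Customer(id: Int, firstName: String, LastName: String, destination: Int)

    val spark = SparkSession.builder.appName("datasets").master("local[*]").getOrCreate()
    import spark.implicits._

    // .as[T] maps the relational schema onto plain JVM case classes.
    val orders: Dataset[Order]       = spark.read.parquet("/data/orders").as[Order]
    val customers: Dataset[Customer] = spark.read.parquet("/data/customers").as[Customer]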
  • 143. Dataset API orders.as("o") // Customer has a Location .join(customers.as("c"), customers("id") === orders("customer")) .join(destinations.as("d"), destinations("id") === customers("destination")) // Supplier has a destination .join(suppliers.as("s"), suppliers("id") === orders("supplier")) .join(warehouses.as("w"), warehouses("supplier") === suppliers("id")) .where(warehouses("address") === destinations("address")) .select( $"o.timePlaced", $"c.firstName", $"c.LastName", $"w.address", $"d.address")
  • 144. Back to the Trenches? select o.timePlaced, c.firstName, c.LastName, w.address, d.address from orders o join customers c on c.id = o.customer join destinations d on d.id = c.destination join suppliers s on s.id = o.supplier join warehouses w on w.supplier = s.id where w.address = d.address
  • 145. Salvation Cometh... in the form of a QuillSparkContext def sameAreaOrder = quote { for { o <- orders c <- customers if (c.id === o.customer) d <- destinations if (d.id === c.destination) s <- suppliers if (s.id === o.supplier) w <- warehouses if (w.supplier === s.id) } yield ( o.timePlaced, c.firstName, c.LastName, w.address, d.address ) }
  • 146. def sameAreaOrder = quote { for { o <- query[Orders] c <- query[Customers] if (c.id === o.customer) d <- query[Destinations] if (d.id === c.destination) s <- query[Suppliers] if (s.id === o.supplier) w <- query[Warehouses] if (w.supplier === s.id) } yield ( o.timePlaced, c.firstName, c.LastName, w.address, d.address ) } Salvation Cometh... in the form of a QuillSparkContext
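A rough sketch of executing such a quotation with quill-spark (the wiring below reflects my understanding of the QuillSparkContext API — an implicit SQLContext, liftQuery to bring Datasets into a quotation, and run to get a Dataset back; the case classes and paths are hypothetical):

    import org.apache.spark.sql.{Dataset, SparkSession}
    import io.getquill.QuillSparkContext._

    case class Orders(id: Int, customer: Int, supplier: Int, timePlaced: java.sql.Timestamp)
    case class Customers(id: Int, firstName: String, LastName: String, destination: Int)

    val spark = SparkSession.builder.appName("quill-spark").master("local[*]").getOrCreate()
    implicit val sqlContext = spark.sqlContext   // required by QuillSparkContext
    import spark.implicits._

    val orders: Dataset[Orders]       = spark.read.parquet("/data/orders").as[Orders]
    val customers: Dataset[Customers] = spark.read.parquet("/data/customers").as[Customers]

    // Datasets enter the quotation through liftQuery; run(...) hands back a Dataset,
    // with the whole comprehension compiled down to a single Spark SQL query.
    val sameAreaOrder = quote {
      for {
        o <- liftQuery(orders)
        c <- liftQuery(customers) if c.id == o.customer
      } yield (o.timePlaced, c.firstName, c.LastName)
    }
    val result = run(sameAreaOrder)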
  • 147. Let’s Take a Step Back
  • 150. Future of SQL Likely Here
  • 151. Every single one of these already has a JDBC Driver!

Editor's Notes

  1. The RG device. YS
  2. Stuff that must be done in database schemas, or spark dataframes. Future like past sto
  3. EU_Client_accounts almost all technical debt, pretend inline udf exists that can return parameterized views
  4. EU_Client_accounts almost all technical debt
  5. EU_Client_accounts almost all technical debt
  6. Whenever introduce new business unit, copy almost all code
  7. Last time, I mentioned Quill doesn’t support case-class (CC) yields; Brian asked how to do >22 arity…. Quill has always had an ‘unlimited tuple’ type. Case classes in yields were added as of 2.2.0; it’s now on 2.3.1
  8. Note, that union operator is possible because same type is returned
  9. Note, that union operator is possible because same type is returned
  10. Option.flatten and Option.flatMap don’t exist.
  11. Have to use the pattern val/def myMethod = quote { (a, b, c) => output } for passing individual fields
  12. The Slick API more closely resembles Scala collections (sto); Quill tries to be more natural to the use case. ONLY ALTERNATIVE TO batch inserts in thousands and then using them for other inserts, i.e. use those ids to create foreign keys in another column in another table.
  13. Paper published 2013 sto
  14. Russian phrase: a system that has multiple levels of self-delusion, sto, fractally self-deluded
  15. Paper published 2013
  16. 2015
  17. Note that the upper query cannot be generated at compile time
  18. I.e. these classes were produced by a code generator; we don’t always know which generator outputs will have them or not (because these are Parquet files in a data lake)
  19. Half hour test suite vs half minute test suite. Can test an ETL system this way simply
  20. Sto, they all implemented SQL? Why? Is there really nothing better?
  21. Sto, they all implemented SQL? Why? Is there really nothing better?