Using Auto Incrementing fields with PostgreSQL and Slick

Submitted by 蹲街弑〆低调 on 2019-11-29 06:42:04

Question


How does one insert records into PostgreSQL using AutoInc keys with Slick mapped tables? If I use an Option for the id in my case class and set it to None, then PostgreSQL complains on insert that the field cannot be null. This works for H2, but not for PostgreSQL:

//import scala.slick.driver.H2Driver.simple._
//import scala.slick.driver.BasicProfile.SimpleQL.Table
import scala.slick.driver.PostgresDriver.simple._
import Database.threadLocalSession

object TestMappedTable extends App{

    case class User(id: Option[Int], first: String, last: String)

    object Users extends Table[User]("users") {
        def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
        def first = column[String]("first")
        def last = column[String]("last")
        def * = id.? ~ first ~ last <> (User, User.unapply _)
        def ins1 = first ~ last returning id
        val findByID = createFinderBy(_.id)
        def autoInc = id.? ~ first ~ last <> (User, User.unapply _) returning id
    }

 // implicit val session = Database.forURL("jdbc:h2:mem:test1", driver = "org.h2.Driver").createSession()
    implicit val session = Database.forURL("jdbc:postgresql:test:slicktest",
                           driver="org.postgresql.Driver",
                           user="postgres",
                           password="xxx")

  session.withTransaction{
    Users.ddl.create

    // insert data
    print(Users.insert(User(None, "Jack", "Green" )))
    print(Users.insert(User(None, "Joe", "Blue" )))
    print(Users.insert(User(None, "John", "Purple" )))
    val u = Users.insert(User(None, "Jim", "Yellow" ))
  //  println(u.id.get)
    print(Users.autoInc.insert(User(None, "Johnathan", "Seagul" )))
  }
  session.withTransaction{
    val queryUsers = for {
    user <- Users
  } yield (user.id, user.first)
  println(queryUsers.list)

  Users.where(_.id between(1, 2)).foreach(println)
  println("ID 3 -> " + Users.findByID.first(3))
  }
}

Running the above with the H2 connection (the commented-out lines) succeeds, but switching to the PostgreSQL connection shown gives:

[error] (run-main) org.postgresql.util.PSQLException: ERROR: null value in column "id" violates not-null constraint
org.postgresql.util.PSQLException: ERROR: null value in column "id" violates not-null constraint

Answer 1:


This is working here:

object Application extends Table[(Long, String)]("application") {   
    def idlApplication = column[Long]("idlapplication", O.PrimaryKey, O.AutoInc)
    def appName = column[String]("appname")
    def * = idlApplication ~ appName
    def autoInc = appName returning idlApplication
}

var id = Application.autoInc.insert("App1")

This is how my SQL looks:

CREATE TABLE application
(idlapplication BIGSERIAL PRIMARY KEY,
appName VARCHAR(500));

Update:

The specific problem with regard to a mapped table with User (as in the question) can be solved as follows:

  def forInsert = first ~ last <>
    ({ (f, l) => User(None, f, l) }, { u:User => Some((u.first, u.last)) })

This is from the test cases in the Slick git repository.
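
For completeness, here is a usage sketch (mine, not from the original answer) against the Users table from the question, assuming the forInsert projection above has been added to it:

// Usage sketch (Slick 1.x style, matching the question's code). Because
// forInsert omits the id column entirely, PostgreSQL fills it from the
// underlying sequence instead of receiving NULL.
session.withTransaction {
  Users.forInsert.insert(User(None, "Jack", "Green"))
  Users.forInsert.insert(User(None, "Joe", "Blue"))
}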




Answer 2:


I tackled this problem in a different way. Since my application logic expects User objects to always have an id, and the only point where one doesn't exist is during insertion into the database, I use an auxiliary NewUser case class which doesn't have an id.

case class User(id: Int, first: String, last: String)
case class NewUser(first: String, last: String)

object Users extends Table[User]("users") {
  def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
  def first = column[String]("first")
  def last = column[String]("last")

  def * = id ~ first ~ last <> (User, User.unapply _)
  def autoInc = first ~ last <> (NewUser, NewUser.unapply _) returning id
}

val id = Users.autoInc.insert(NewUser("John", "Doe"))

Again, User maps 1:1 to a database row, while NewUser is only a data container for the insert invocation; it could be replaced by a tuple if you want to avoid the extra case class.
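
If you do go the tuple route, a minimal sketch (mine, not from the original answer) of the pieces that change:

// Tuple-based variant (sketch): the insert projection is a plain
// (first, last) tuple instead of the NewUser case class.
def autoInc = first ~ last returning id

val id = Users.autoInc.insert(("John", "Doe"))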

EDIT: If you want more safety (with somewhat increased verbosity) you can make use of a trait for the case classes like so:

trait UserT {
  def first: String
  def last: String
}
case class User(id: Int, first: String, last: String) extends UserT
case class NewUser(first: String, last: String) extends UserT
// ... the rest remains intact

In this case you would apply your model changes to the trait first (including any mixins you might need), and optionally add default values to the NewUser.
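
For instance (a sketch assuming the trait above), default values would live only on the insert-side class:

// Sketch: defaults go on NewUser only; User always carries real data.
case class NewUser(first: String = "John", last: String = "Doe") extends UserT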

Author's opinion: I still prefer the no-trait solution, as it is more compact; changes to the model are a matter of copy-pasting the User params and then removing the id (the auto-increment primary key), both in the case class declaration and in the table projections.




Answer 3:


We're using a slightly different approach. Instead of creating a further projection, we request the next id for the table, copy it into the case class, and use the default projection '*' to insert the entry.

For PostgreSQL it looks like this:

Let your table objects implement this trait:

// Requires Slick's plain-SQL queries, e.g. import scala.slick.jdbc.{StaticQuery => Q}
trait TableWithId { this: Table[_] =>
  /**
   * Can be overridden if the plural of the table name is irregular.
   **/
  val idColName: String = s"${tableName.dropRight(1)}_id"
  def id = column[Int](idColName, O.PrimaryKey, O.AutoInc)
  def getNextId = (Q[Int] + s"""select nextval('"${tableName}_${idColName}_seq"')""").first
}

All your entity case classes need a method like this (should also be defined in a trait):

case class Entity(...) {
  def withId(newId: Id): Entity = this.copy(id = Some(newId))
}
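
A sketch of such a trait (mine, not from the original answer), assuming the entities keep their key as an Option[Int]:

// Hypothetical trait so every entity exposes withId with the same signature.
trait WithId[E] {
  def id: Option[Int]
  def withId(newId: Int): E
}

case class Account(id: Option[Int], name: String) extends WithId[Account] {
  def withId(newId: Int): Account = copy(id = Some(newId))
}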

New entities can now be inserted this way:

object Entities extends Table[Entity]("entities") with TableWithId {
  override val idColName: String = "entity_id"
  ...
  def save(entity: Entity) = this insert entity.withId(getNextId) 
}

The code is still not DRY, because you need to define the withId method for each entity. Furthermore, you have to request the next id before you insert an entity, which might have a performance impact, but it shouldn't be noticeable unless you insert thousands of entries at a time.

The main advantage is that there is no need for a second projection, which makes the code less error-prone, in particular for tables with many columns.




Answer 4:


Another trick is to make the id of the case class a var:

case class Entity(var id: Long)

To insert an instance, create it like this: Entity(null.asInstanceOf[Long]).

I've tested that it works.
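
For illustration, a minimal sketch (mine; the Items table is made up) in Slick 2.x style, where columns marked O.AutoInc are excluded from a plain insert by default, so the placeholder id is never actually sent to the database:

import scala.slick.driver.PostgresDriver.simple._

case class Item(var id: Long, name: String)

class Items(tag: Tag) extends Table[Item](tag, "items") {
  def id   = column[Long]("id", O.PrimaryKey, O.AutoInc)
  def name = column[String]("name")
  def *    = (id, name) <> (Item.tupled, Item.unapply)
}

val items = TableQuery[Items]
val db = Database.forURL("jdbc:postgresql:test", driver = "org.postgresql.Driver")

db.withSession { implicit session =>
  items += Item(null.asInstanceOf[Long], "widget") // placeholder id (0L after the cast)
}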




Answer 5:


I faced the same problem when trying to get the computer-database sample from play-slick-3.0 running after changing the database to PostgreSQL. What solved the problem was changing the type of the id column (the primary key) to SERIAL in the evolution file /conf/evolutions/default/1.sql (originally it was BIGINT). See https://groups.google.com/forum/?fromgroups=#%21topic/scalaquery/OEOF8HNzn2U for the whole discussion. Cheers, ReneX




Answer 6:


The solution I've found is to use SqlType("SERIAL") in the column definition. I haven't tested it extensively yet, but it seems to work so far.

So instead of

def id: Rep[PK[SomeTable]] = column[PK[SomeTable]]("id", O.PrimaryKey, O.AutoInc)

You should do:

def id: Rep[PK[SomeTable]] = column[PK[SomeTable]]("id", SqlType("SERIAL"), O.PrimaryKey, O.AutoInc)

Where PK is defined like the example in the "Essential Slick" book:

final case class PK[A](value: Long = 0L) extends AnyVal with MappedTo[Long]
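
For context, a minimal Slick 3 sketch (mine; SomeTable, SomeRow and the column names are made up) putting the pieces together, written here with the O.SqlType form of the column option:

// Slick 3 sketch (3.2+ import path); combines SqlType("SERIAL"), O.AutoInc
// and the value-class primary key from "Essential Slick".
import slick.jdbc.PostgresProfile.api._

final case class PK[A](value: Long = 0L) extends AnyVal with MappedTo[Long]

final case class SomeRow(id: PK[SomeTable], name: String)

final class SomeTable(tag: Tag) extends Table[SomeRow](tag, "some_table") {
  def id   = column[PK[SomeTable]]("id", O.SqlType("SERIAL"), O.PrimaryKey, O.AutoInc)
  def name = column[String]("name")
  def *    = (id, name) <> (SomeRow.tupled, SomeRow.unapply)
}

val someTable = TableQuery[SomeTable]

// The AutoInc column is skipped on insert; `returning` reads the generated key
// back. Run the action with db.run(...).
val insertAndReturnId =
  (someTable returning someTable.map(_.id)) += SomeRow(PK[SomeTable](0L), "example")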


Source: https://stackoverflow.com/questions/13199198/using-auto-incrementing-fields-with-postgresql-and-slick
