google-cloud-spanner

Storing a UUID in Cloud Spanner

Submitted by 末鹿安然 on 2020-01-03 14:36:11
Question: I would like to use a UUID as a primary key in Cloud Spanner. What is the best way to read and write UUIDs? Is there a UUID type, or client library support? Answer 1: The simplest solution is just to store it as a STRING in the standard RFC 4122 format, e.g. "d1a0ce61-b9dd-4169-96a8-d0d7789b61d9". This will take 37 bytes to store (36 bytes plus a length byte). If you really want to save every possible byte, you could store your UUID as two INT64s. However, you would need to write your own
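The two-INT64 split that the answer alludes to can be sketched in Python. Note that Spanner's INT64 is signed, so each unsigned 64-bit half must be mapped into signed range; the function names below are illustrative, not from any client library:

```python
import uuid

def uuid_to_int64_pair(u: uuid.UUID):
    """Split a 128-bit UUID into two signed 64-bit ints (Spanner INT64 is signed)."""
    def to_signed(x):
        return x - (1 << 64) if x >= (1 << 63) else x
    return to_signed(u.int >> 64), to_signed(u.int & ((1 << 64) - 1))

def int64_pair_to_uuid(hi: int, lo: int) -> uuid.UUID:
    """Reassemble the UUID from its two signed halves."""
    def to_unsigned(x):
        return x + (1 << 64) if x < 0 else x
    return uuid.UUID(int=(to_unsigned(hi) << 64) | to_unsigned(lo))

u = uuid.UUID("d1a0ce61-b9dd-4169-96a8-d0d7789b61d9")
hi, lo = uuid_to_int64_pair(u)
assert int64_pair_to_uuid(hi, lo) == u
```

The round trip is lossless, at the cost of human readability and of writing the conversion yourself on every read and write path.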

Is there a Cloud Spanner framework forthcoming?

Submitted by 荒凉一梦 on 2019-12-24 02:25:40
Question: I started looking at Google's Cloud Spanner and it certainly looks interesting. Since Ruby has Rails, MongoDB has Meteor, and RethinkDB has Horizon, is there any talk of Cloud Spanner getting some sort of dedicated framework, or of an existing framework adapting to Cloud Spanner? Or is Cloud Spanner much too new to even consider this yet? Answer 1: We don't plan to create a Cloud Spanner-specific framework, but hope to integrate it into all the existing ORMs and popular frameworks. The open

Google Spanner: JDBC Connection Strings?

Submitted by 落爺英雄遲暮 on 2019-12-24 01:56:27
Question: While Spanner looks exciting, the documentation for the Simba JDBC driver (included in the download links here: https://cloud.google.com/spanner/docs/partners/drivers) is relatively sparse, especially when compared to the documentation for the Simba JDBC BigQuery driver (https://cloud.google.com/bigquery/partners/simba-drivers/). In particular, the documentation only mentions one connection string: jdbc:cloudspanner://localhost;Project=simba-cloudspanner-jdbc;Instance=test-instance;Database
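For reference, the general shape of that Simba connection URL can be written with placeholders; the property names here are inferred from the single documented string above and should be checked against the driver's install guide:

```
jdbc:cloudspanner://localhost;Project=<gcp-project-id>;Instance=<instance-id>;Database=<database-id>
```

The host portion is `localhost` because the driver connects out to the Cloud Spanner API itself rather than to a local server; authentication options are configured separately per the driver documentation.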

GCP table creation taking a long time to create

Submitted by 故事扮演 on 2019-12-23 03:29:16
Question: We're running into an issue where GCP takes a long time to create a table. Our instance is using 1 node, and the database is currently at 23.73% CPU, 74.99 MB in size, and already has 106 tables. I'm sure we're not hitting the limits yet, as the maximum number of tables is 2048. Has anyone encountered this issue, and is there a possible solution? Thanks! More info: the operation usually takes more than 2 hours to complete, and it happens from time to time when creating multiple tables. When all the tables

How do I implement pagination?

Submitted by 穿精又带淫゛_ on 2019-12-22 10:34:07
Question: I have a People table (Id, first_name, last_name), where the primary key is Id. I want to be able to look up the first N people in the table ordered by (last_name, first_name, Id). In some cases, I need to look up the next N people, and so on. I want to do this efficiently. What is the best way to do this? Answer 1: There are two primary ways: use LIMIT and OFFSET, or use LIMIT and the key of the previous page. The OFFSET strategy lets you read an arbitrary page, but is not efficient, since each time the
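The key-of-previous-page (keyset) strategy can be illustrated with an in-memory sketch using the table and column names from the question; in SQL, the equivalent WHERE clause compares (last_name, first_name, Id) against the last key of the previous page:

```python
# Keyset ("key of previous page") pagination over rows of (Id, first_name, last_name).
def sort_key(row):
    id_, first, last = row
    return (last, first, id_)          # matches ORDER BY last_name, first_name, Id

def next_page(rows, n, after_key=None):
    """Return the next n rows in key order, strictly after after_key
    (after_key=None means the first page)."""
    ordered = sorted(rows, key=sort_key)
    if after_key is not None:
        ordered = [r for r in ordered if sort_key(r) > after_key]
    return ordered[:n]

people = [(1, "Ann", "Zhou"), (2, "Bob", "Adams"), (3, "Cal", "Adams"), (4, "Dee", "Lee")]
page1 = next_page(people, 2)
page2 = next_page(people, 2, after_key=sort_key(page1[-1]))
```

Unlike OFFSET, this approach lets the database seek directly to the start of the page via the primary-key ordering, so the cost per page stays constant instead of growing with the page offset.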

Getting error while connecting to jpa using micronaut-data-hibernate-jpa library

Submitted by 你说的曾经没有我的故事 on 2019-12-22 08:53:08
Question: I want to use JPA with Micronaut. For that I am using the io.micronaut.data:micronaut-data-hibernate-jpa:1.0.0.M1 library. Whenever I run my application and hit the endpoint to get the data, I get the following error: { message: "Internal Server Error: No backing RepositoryOperations configured for repository. Check your configuration and try again" } I tried looking up the error but couldn't find anything. Attaching my files here. Please help. build.gradle plugins { id "net.ltgt.apt-eclipse"
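This error usually means Micronaut Data has no datasource/JPA configuration to bind the repository to, or the `micronaut-data-processor` annotation processor was not on the annotation-processor path so no repository implementation was generated. A hedged sketch of the configuration typically required (exact coordinates, property names, and the entity package are assumptions that may differ by Micronaut version):

```yaml
# application.yml (sketch; values are placeholders)
datasources:
  default:
    url: jdbc:h2:mem:devDb
    driverClassName: org.h2.Driver
    username: sa
    password: ""
jpa:
  default:
    packages-to-scan: com.example.domain   # hypothetical entity package
    properties:
      hibernate:
        hbm2ddl:
          auto: update
```

In build.gradle this would be paired with something like `annotationProcessor "io.micronaut.data:micronaut-data-processor:1.0.0.M1"` and a connection-pool module such as `micronaut-jdbc-hikari`, alongside the `micronaut-data-hibernate-jpa` dependency already shown.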

GCP: What is the best option to setup a periodic Data pipeline from Spanner to Big Query

Submitted by 試著忘記壹切 on 2019-12-22 01:32:41
Question: Task: we have to set up a periodic sync of records from Spanner to BigQuery. Our Spanner database has a relational table hierarchy. Options considered: I was thinking of using Dataflow templates to set up this data pipeline. Option 1: set up a job with the Dataflow template 'Cloud Spanner to Cloud Storage Text' and then another with the Dataflow template 'Cloud Storage Text to BigQuery'. Con: the first template works only on a single table, and we have many tables to export. Option 2: use 'Cloud Spanner
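For context, option 1 corresponds to launching a Google-provided template per table, roughly as below; the template path and parameter names are assumptions based on the public Dataflow template catalog and should be verified against the current documentation:

```
gcloud dataflow jobs run spanner-export-mytable \
  --gcs-location gs://dataflow-templates/latest/Cloud_Spanner_to_GCS_Text \
  --parameters spannerProjectId=<project>,spannerInstanceId=<instance>,spannerDatabaseId=<database>,spannerTable=<table>,textWritePrefix=gs://<bucket>/export/<table>/
```

Because the template takes a single `spannerTable`, a periodic multi-table sync would need one such job per table (e.g. driven by a scheduler), which is the con the question identifies.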

google dataflow read from spanner

Submitted by ∥☆過路亽.° on 2019-12-18 09:38:25
Question: I am trying to read a table from a Google Spanner database and write it to a text file to do a backup, using Google Dataflow with the Python SDK. I have written the following script:

from __future__ import absolute_import
import argparse
import itertools
import logging
import re
import time
import datetime as dt
import logging
import apache_beam as beam
from apache_beam.io import iobase
from apache_beam.io import WriteToText
from apache_beam.io.range_trackers import OffsetRangeTracker,

Google Spanner | java.lang.IllegalArgumentException: Jetty ALPN/NPN has not been properly configured

Submitted by 梦想的初衷 on 2019-12-13 03:07:42
Question: So I have a problem that I've seen before, but all attempts to use other fixes have not worked. I'm having a problem with Google Spanner, more specifically at the line: SpannerOptions options = SpannerOptions.newBuilder().build(); where I receive the stack error of:

WARNING: Error for /_ah/api/discovery/v1/apis/helloworld/v1/rest
java.lang.ExceptionInInitializerError
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Unknown Source)
    at com.google.api.server.spi
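The "Jetty ALPN/NPN has not been properly configured" error generally means the gRPC transport underneath the Spanner client cannot find a TLS/ALPN provider. A commonly suggested fix is to add the statically linked BoringSSL netty artifact so gRPC uses its own TLS stack instead of the JDK's; the version below is illustrative and must be matched to your grpc-netty version per the grpc-java security documentation:

```
<!-- pom.xml (sketch; pick the version matching your grpc-netty) -->
<dependency>
  <groupId>io.netty</groupId>
  <artifactId>netty-tcnative-boringssl-static</artifactId>
  <version>2.0.20.Final</version>
</dependency>
```

An alternative is to use the `grpc-netty-shaded` transport, which bundles its own BoringSSL and avoids the dependency-matching problem entirely.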

In Google Spanner, is it possible for the exact same commit timestamp to appear again after it has already been observed?

Submitted by 怎甘沉沦 on 2019-12-12 17:28:10
Question: In Google Spanner, commit timestamps are generated by the server and based on "TrueTime", as discussed at https://cloud.google.com/spanner/docs/commit-timestamp. That page also states that timestamps are not guaranteed to be unique, so multiple independent writers can generate timestamps that are exactly the same. The documentation on consistency guarantees states that, in addition, if one transaction completes before another transaction starts to commit, the system guarantees that