google-cloud-spanner

Way to prevent transaction timeout?

Posted by 谁说胖子不能爱 on 2019-12-11 06:06:10
Question: I'm running read-write transactions that take longer than 10 seconds, and they are timing out (failing with ABORTED errors). Is there a way to specify a longer timeout?

Answer 1: There is no way to specify the timeout for a transaction, but you have a few options. You could periodically issue an executeSql request every 5 to 8 seconds to keep your transaction alive; a trivial query such as SELECT 1 is enough (the documentation on idle transactions has more detail). Alternatively, you could use a read-only transaction instead.
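The keep-alive option above can be sketched as a small scheduler that fires the trivial query on a fixed period while the long-running transaction body executes. This is a hypothetical illustration: the Runnable here only counts invocations, whereas a real Spanner program would issue something like txn.executeQuery(Statement.of("SELECT 1")) inside the transaction runner.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class TransactionKeepAlive {
    // Schedules the keep-alive action at a fixed period until shut down.
    public static ScheduledExecutorService start(Runnable keepAlive, long periodMillis) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(keepAlive, periodMillis, periodMillis, TimeUnit.MILLISECONDS);
        return scheduler;
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicInteger pings = new AtomicInteger();
        // Stand-in for the trivial SELECT 1 query; here we just count invocations.
        ScheduledExecutorService s = start(pings::incrementAndGet, 50);
        Thread.sleep(300); // simulate the long-running transaction body
        s.shutdown();
        System.out.println("keep-alives sent: " + pings.get());
    }
}
```

In real code the scheduler would be started when the transaction callable begins and shut down in a finally block when it returns.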

Is it possible to rename columns?

Posted by 笑着哭i on 2019-12-11 05:59:45
Question: Is it possible to issue something like RENAME COLUMN col1 col2 in Google Cloud Spanner? It looks from the DDL that this isn't possible; if not, is this a design choice or a limitation whilst in Beta?

Answer 1: No, this is not possible. Currently you can only do the following with regard to altering columns in a table: add a new column; delete an existing one, unless it is a key column; change delete behavior (cascading or not); convert between STRING and BYTES; change the length of a STRING or BYTES column; and add or remove a NOT NULL constraint on a non-key column.
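Given the allowed alterations above, the usual workaround for a rename is add-backfill-drop. A sketch, with hypothetical table/column names and types:

```sql
-- 1. Add the new column with the same type as the old one.
ALTER TABLE People ADD COLUMN col2 STRING(MAX);

-- 2. Backfill it from the old column (e.g. via DML, batched for large tables).
UPDATE People SET col2 = col1 WHERE col2 IS NULL;

-- 3. Once all readers and writers use col2, drop the old column.
ALTER TABLE People DROP COLUMN col1;
```

Note that this only works for non-key columns, since key columns cannot be dropped.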

How to export Google spanner query results to .csv or google sheets?

Posted by 安稳与你 on 2019-12-11 02:56:32
Question: I am new to Google Spanner. I have run a query that returned about 50k rows of data, and I want to export that result set to my local machine as a .csv file or into a Google Sheet. Previously I used TOAD, which has an export button, but here I do not see any such option. Any suggestions?

Answer 1: The gcloud spanner databases execute-sql command allows you to run SQL statements on the command line and redirect the output to a file. The --format=csv global argument should produce CSV output.

How to batch load custom Avro data generated from another source?

Posted by 梦想的初衷 on 2019-12-10 23:21:59
Question: The Cloud Spanner docs say that Spanner can export/import the Avro format. Can this path also be used for batch ingestion of Avro data generated from another source? The docs seem to suggest it can only import Avro data that was also generated by Spanner. I ran a quick export job and took a look at the generated files; the manifest and schema look fairly straightforward. I figured I would post here in case this rabbit hole is deep. The manifest file begins: { "files": [{ "name": "people.avro-00000-of

google Cloud spanner java.lang.IllegalArgumentException: Jetty ALPN/NPN has not been properly configured

Posted by 不打扰是莪最后的温柔 on 2019-12-10 15:38:59
Question: I am new to Google Cloud Spanner, and to explore it I started with the documentation provided by Google here. To explore any database you start with data operations, and that is what I did: I started writing data to Spanner using the simple Java application at https://github.com/GoogleCloudPlatform/java-docs-samples/blob/master/spanner/cloud-client/src/main/java/com/example/spanner/SpannerSample.java. I made changes in the driver class in the respective places shown in the following code
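The "Jetty ALPN/NPN has not been properly configured" error usually means gRPC's Netty transport cannot find an SSL provider with ALPN support, which is common on Java 8 outside the bootclasspath-patched Jetty setup. One commonly suggested fix is to add the netty-tcnative BoringSSL static dependency so OpenSSL-based ALPN is used instead. The version below is illustrative; it must be matched to your grpc-java/Netty versions:

```xml
<!-- Illustrative Maven dependency; pick a netty-tcnative version compatible
     with your grpc-java and Netty versions (see grpc-java's SECURITY.md table). -->
<dependency>
  <groupId>io.netty</groupId>
  <artifactId>netty-tcnative-boringssl-static</artifactId>
  <version>2.0.25.Final</version>
  <scope>runtime</scope>
</dependency>
```

Using the grpc-netty-shaded artifact, which bundles its own BoringSSL, is another way to avoid the conflict on application servers such as Tomcat.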

Local development with cloud-spanner

Posted by 时光怂恿深爱的人放手 on 2019-12-09 14:07:53
Question: Is there any way to do local development with Cloud Spanner? I've taken a look through the docs and the CLI tool, and there doesn't seem to be anything there. Alternatively, can someone suggest a SQL database that behaves similarly for reads (I'm not sure what to do about writes)? EDIT: To clarify, I'm looking for a database that speaks the same flavour of SQL as Cloud Spanner so I can do development locally. The exact performance characteristics are not as important as the API and consistency behaviour.

Streaming MutationGroups into Spanner

Posted by 一笑奈何 on 2019-12-07 10:05:49
Question: I'm trying to stream MutationGroups into Spanner with SpannerIO. The goal is to write new MutationGroups every 10 seconds, as we will use Spanner to query near-real-time KPIs. When I don't use any windows, I get the following error:

Exception in thread "main" java.lang.IllegalStateException: GroupByKey cannot be applied to non-bounded PCollection in the GlobalWindow without a trigger. Use a Window.into or Window.triggering transform prior to GroupByKey.
    at org.apache.beam.sdk.transforms.GroupByKey
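The usual fix for that error is to apply a windowing transform, e.g. Window.into(FixedWindows.of(Duration.standardSeconds(10))), to the unbounded PCollection before the GroupByKey that SpannerIO performs internally. As a self-contained illustration of what fixed windowing does (plain Java, not Beam code), elements are simply assigned to time buckets by their timestamp:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Toy illustration of fixed windowing: group timestamped elements into
// fixed-size buckets, the way Window.into(FixedWindows.of(...)) would
// before a GroupByKey.
public class FixedWindowSketch {
    /** Returns the start of the fixed window containing tsMillis. */
    public static long windowStart(long tsMillis, long sizeMillis) {
        return (tsMillis / sizeMillis) * sizeMillis;
    }

    /** Groups (timestamp, value) pairs into fixed windows keyed by window start. */
    public static Map<Long, List<String>> assign(Map<Long, String> events, long sizeMillis) {
        Map<Long, List<String>> windows = new TreeMap<>();
        for (Map.Entry<Long, String> e : events.entrySet()) {
            windows.computeIfAbsent(windowStart(e.getKey(), sizeMillis), k -> new ArrayList<>())
                   .add(e.getValue());
        }
        return windows;
    }
}
```

In the real pipeline, each 10-second window then closes and its MutationGroups are flushed to Spanner as one batch.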

Google-Cloud: Jetty ALPN/NPN has not been properly configured

Posted by 南笙酒味 on 2019-12-06 07:21:41
Question: I'm getting an exception while using Google Pub/Sub to list topics; my web application runs on Tomcat.

public static List<String> listTopics(GcpCredentials gcCredentials, String project) throws GCPException, IOException {
    List<String> topics = new ArrayList<>();
    TopicAdminClient client = getTopicClient(gcCredentials);
    ProjectName projectName = ProjectName.create(project);
    ListTopicsPagedResponse response = client.listTopics(projectName);
    for (Topic topic : response.iterateAll()) {
        topics.add(topic.getName());
    }
    return topics;
}

How do I implement pagination?

Posted by 旧城冷巷雨未停 on 2019-12-05 21:39:30
I have a People table (Id, first_name, last_name), where the primary key is Id. I want to be able to look up the first N people in the table ordered by (last_name, first_name, Id). In some cases, I need to look up the next N people, and so on. I want to do this efficiently. What is the best way to do this?

There are two primary ways: use LIMIT and OFFSET, or use LIMIT together with the key of the previous page. The OFFSET strategy lets you read an arbitrary page, but it is not efficient, since each time the query runs it must read the rows from all previous pages. It is, however, the easiest to implement.
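The key-of-previous-page strategy can be sketched in-memory (an illustration, not Spanner client code): sort by (last_name, first_name, Id) and return the N rows strictly after the last key seen. In SQL this corresponds to a WHERE clause comparing against the previous page's last row (Spanner requires expanding the tuple comparison into AND/OR terms rather than using row-value syntax).

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch of keyset ("key of previous page") pagination over an in-memory list,
// mirroring the idea of:
//   SELECT Id, first_name, last_name FROM People
//   WHERE (last_name, first_name, Id) comes after (@last, @first, @id)
//   ORDER BY last_name, first_name, Id LIMIT @n
public class KeysetPager {
    public static final class Person {
        final long id; final String first; final String last;
        public Person(long id, String first, String last) {
            this.id = id; this.first = first; this.last = last;
        }
    }

    static final Comparator<Person> ORDER =
        Comparator.comparing((Person p) -> p.last)
                  .thenComparing(p -> p.first)
                  .thenComparingLong(p -> p.id);

    /** First page: pass lastSeen = null. Next pages: rows strictly after lastSeen. */
    public static List<Person> page(List<Person> table, Person lastSeen, int n) {
        List<Person> sorted = new ArrayList<>(table);
        sorted.sort(ORDER);
        List<Person> out = new ArrayList<>();
        for (Person p : sorted) {
            if (lastSeen != null && ORDER.compare(p, lastSeen) <= 0) continue; // skip up to the key
            if (out.size() == n) break;
            out.add(p);
        }
        return out;
    }
}
```

Unlike OFFSET, the database version of this query seeks directly to the previous page's key in the index, so work per page stays constant.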

What is the TrueTime API in Google's Spanner?

Posted by 三世轮回 on 2019-12-05 11:37:23
I tried to read the documentation multiple times but failed to understand it. Can someone explain it in layman's terms?

TrueTime is an API available at Google that directly exposes clock uncertainty. Compared to standard datetime libraries, instead of a particular timestamp, TrueTime's now() function returns an interval of time [earliest, latest]. It also provides two functions: after(t) returns true if t has definitely passed, i.e. t < now().earliest; before(t) returns true if t has definitely not arrived, i.e. t > now().latest. What's impressive is that the implementation of now() returns a very narrow interval, typically only a few milliseconds wide, thanks to GPS receivers and atomic clocks in Google's datacenters.
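The semantics described above can be modeled with a toy class (purely illustrative; real TrueTime derives its tight error bound from GPS and atomic-clock time masters, while here the uncertainty epsilon is just a parameter):

```java
// Toy model of the TrueTime interface: now() is an interval, not a point.
public class ToyTrueTime {
    public static final class TTInterval {
        public final long earliest, latest;
        public TTInterval(long earliest, long latest) {
            this.earliest = earliest; this.latest = latest;
        }
    }

    private final long epsilonMillis;

    public ToyTrueTime(long epsilonMillis) { this.epsilonMillis = epsilonMillis; }

    /** The true time is guaranteed to lie within the returned interval. */
    public TTInterval now() {
        long t = System.currentTimeMillis();
        return new TTInterval(t - epsilonMillis, t + epsilonMillis);
    }

    /** True iff t has definitely passed. */
    public boolean after(long t) { return t < now().earliest; }

    /** True iff t has definitely not arrived yet. */
    public boolean before(long t) { return t > now().latest; }
}
```

Note that for timestamps inside the uncertainty window, both after(t) and before(t) return false: the API simply refuses to claim more than the clocks can guarantee.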