concurrency

Timeout while waiting for a batch of Futures to complete?

Submitted by 冷眼眸甩不掉的悲伤 on 2020-12-30 06:48:40
Question: I have a set of Futures created by submitting Callables to an Executor. Pseudo code:

```
for all tasks:
    futures.add(executor.submit(new Callable(task)))
```

Now I'd like to wait for all the futures, waiting at most n seconds in total until all complete. I know I can call Future#get(timeout), but if I call that sequentially for all my futures in a loop, the timeouts start adding up. Pseudo code:

```
for all futures:
    future.get(timeout)
```

get blocks with a timeout until the result is ready. Therefore, if the first completes just…
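One way to cap the total wait is to compute a single deadline up front and give each get() only whatever remains of the shared budget; ExecutorService.invokeAll(tasks, timeout, unit) implements essentially this pattern for you. A minimal sketch (the pool size, task body, and 5-second budget are made-up placeholders):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class DeadlineWait {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(4);
        List<Future<Integer>> futures = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            final int n = i;
            futures.add(executor.submit(() -> { Thread.sleep(100); return n; }));
        }

        // One deadline shared by every get(): the timeouts no longer add up.
        long deadline = System.nanoTime() + TimeUnit.SECONDS.toNanos(5);
        for (Future<Integer> f : futures) {
            long remaining = deadline - System.nanoTime();
            Integer result = f.get(Math.max(0L, remaining), TimeUnit.NANOSECONDS);
            System.out.println(result);
        }
        executor.shutdown();
    }
}
```

If any future is still unfinished when the budget runs out, get() throws TimeoutException, which is exactly the "waited at most n seconds" behavior the question asks for.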

How to change external variable's value inside a goroutine closure

Submitted by 你说的曾经没有我的故事 on 2020-12-30 04:10:00
Question:

```go
func (this *l) PostUpload(ctx *Context) {
    //ctx.Response.Status = 500
    l, err := models.NewL(this.Config)
    go func() {
        err = l.Save(file)
        if err != nil {
            ctx.Response.Status = 500
            ctx.Response.Body = err
        } else {
            ctx.Response.Status = 204
        }
    }()
}
```

How do I change the value of ctx.Response.Status inside the goroutine closure?

Answer 1: You have no guarantee of observing changes made to the value of a variable in another goroutine without synchronization. See The Go Memory Model for details. So if you want to…
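The two problems here are that the write may never become visible and that PostUpload may return before the goroutine runs at all. A minimal sketch of the synchronization idea, reduced to plain variables (the question's Context and models types are omitted):

```go
package main

import "fmt"

func main() {
	status := 0
	done := make(chan struct{})

	go func() {
		// ... save the file here; on success:
		status = 204
		close(done) // close(done) happens-before the receive below
	}()

	<-done // wait for the goroutine; this also makes the write visible
	fmt.Println("status:", status)
}
```

Receiving from a closed channel establishes the happens-before edge the Go Memory Model requires, so reading status after `<-done` is safe.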

Handling race conditions in PostgreSQL

Submitted by 狂风中的少年 on 2020-12-29 03:48:11
Question: I have several workers, each holding its own connection to PostgreSQL. The workers manipulate different tables and handle parallel requests from outside the system. One of the tables being accessed is the users table. When some information comes in, I first need to ensure there is a record for that user in the table; if there is no record, I wish to create one first. I'm using the following idiom:

```
if [user does not exist] then [create user]
```

The code for [user does not exist] is…
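On PostgreSQL 9.5 and later, the check-then-create race can be sidestepped entirely by letting the database perform the insert atomically instead of testing first. A sketch with made-up table and column names:

```sql
-- Assumed schema: the unique constraint is what makes this safe.
CREATE TABLE IF NOT EXISTS users (
    id   bigint PRIMARY KEY,
    name text NOT NULL
);

-- "Create the user if missing" as a single atomic statement:
-- concurrent workers may all run this; one insert wins and the
-- others become no-ops instead of raising duplicate-key errors.
INSERT INTO users (id, name)
VALUES (42, 'alice')
ON CONFLICT (id) DO NOTHING;
```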

Does volatile qualifier cancel caching for this memory?

Submitted by 做~自己de王妃 on 2020-12-28 20:35:32
Question: This article, http://www.drdobbs.com/parallel/volatile-vs-volatile/212701484?pgno=2, says that we can't do any optimization for volatile, not even this one (where volatile int& v = *(address);):

```cpp
v = 1;     // C: write to v
local = v; // D: read from v
```

can't be optimized to:

```cpp
v = 1;     // C: write to v
local = 1; // D: read from v
// but it can be done for std::atomic<>
```

It can't be done because between the first and second lines the value of v may be changed by a hardware device (not by the CPU, where this can't…
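The contrast can be sketched in a few lines; here reg stands in for a memory-mapped device register (a plain int in main, so the program actually runs):

```cpp
#include <atomic>

std::atomic<int> shared{0};

int demo(volatile int* reg) {
    // volatile: both accesses must be emitted, because the device
    // may change *reg between the write and the read.
    *reg = 1;         // C: write to v
    int local = *reg; // D: read from v -- must really re-read

    // std::atomic: only other threads can modify it, so the compiler
    // is allowed to fuse the pair and reuse the stored value.
    shared.store(1, std::memory_order_relaxed);
    int local2 = shared.load(std::memory_order_relaxed); // may fold to 1

    return local + local2;
}

int main() {
    int fake_reg = 0; // stand-in for real hardware
    return demo(&fake_reg) == 2 ? 0 : 1;
}
```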

Writing from multiple processes launched via xargs to the same FIFO pipe causes lines to go missing

Submitted by 别等时光非礼了梦想. on 2020-12-15 07:14:38
Question: I have a script where I parallelize job execution while monitoring the progress. I do this using xargs and a named FIFO pipe. My problem is that while xargs performs well, some lines written to the pipe are lost. Any idea what the problem is? For example, the following script (basically my script with dummy data) produces the output below and hangs at the end waiting for the missing lines:

```
$ bash test2.sh
Progress: 0 of 99
DEBUG: Processed data 0 in separate process
Progress: 1 of…
```
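Two usual suspects here are the FIFO being re-opened per read or per write (data can be dropped between opens) and messages longer than PIPE_BUF interleaving. A sketch of a pattern that avoids both, with made-up paths, counts, and message text: the reader holds one read-write descriptor open for the whole run, and each writer emits its record as a single short printf.

```bash
#!/usr/bin/env bash
set -euo pipefail

fifo=/tmp/progress.fifo   # made-up path
total=100
mkfifo "$fifo"

# Reader: open the FIFO once, read-write, and keep fd 3 for the whole
# loop; re-opening per line is where messages usually get lost.
exec 3<>"$fifo"
(
    count=0
    while read -r line <&3; do
        count=$((count + 1))
        echo "Progress: $count of $total ($line)"
        [ "$count" -eq "$total" ] && break
    done
) &
reader=$!

# Writers: one short printf per message. Writes of at most PIPE_BUF
# bytes (>= 512, 4096 on Linux) are atomic, so lines cannot interleave.
export fifo
seq 1 "$total" | xargs -n1 -P8 sh -c 'printf "done %s\n" "$1" >> "$fifo"' worker

wait "$reader"
exec 3<&-
rm -f "$fifo"
```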

Refactor code to use a single channel in an idiomatic way

Submitted by 隐身守侯 on 2020-12-13 03:13:10
Question: I have the following code:

```go
package main

import (
    "fmt"
    "time"
)

type Response struct {
    Data   string
    Status int
}

func main() {
    var rc [10]chan Response
    for i := 0; i < 10; i++ {
        rc[i] = make(chan Response)
    }
    var responses []Response
    for i := 0; i < 10; i++ {
        go func(c chan<- Response, n int) {
            c <- GetData(n)
            close(c)
        }(rc[i], i)
    }
    for _, resp := range rc {
        responses = append(responses, <-resp)
    }
    for _, item := range responses {
        fmt.Printf("%+v\n", item)
    }
}

func GetData(n int) Response {
    time…
```
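The idiomatic single-channel version (a sketch, not the accepted answer verbatim, with GetData stubbed): all workers send to one shared channel, and a sync.WaitGroup closes it once every send has happened, so the results can simply be ranged over.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type Response struct {
	Data   string
	Status int
}

// GetData is a stand-in for the question's worker function.
func GetData(n int) Response {
	time.Sleep(10 * time.Millisecond)
	return Response{Data: fmt.Sprintf("data %d", n), Status: 200}
}

func main() {
	rc := make(chan Response, 10) // one shared channel for all workers
	var wg sync.WaitGroup

	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			rc <- GetData(n)
		}(i)
	}

	// Close the channel only after every worker has sent its result,
	// so the range loop below terminates cleanly.
	go func() {
		wg.Wait()
		close(rc)
	}()

	for resp := range rc {
		fmt.Printf("%+v\n", resp)
	}
}
```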

Ruby MRI 1.8.7 - File writing thread safety

Submitted by 半城伤御伤魂 on 2020-12-10 07:57:10
Question: It seems to me that file writing in Ruby MRI 1.8.7 is completely thread safe.

Example 1 - Flawless Results:

```ruby
File.open("test.txt", "a") { |f|
  threads = []
  1_000_000.times do |n|
    threads << Thread.new do
      f << "#{n}content\n"
    end
  end
  threads.each { |t| t.join }
}
```

Example 2 - Flawless Results (but slower):

```ruby
threads = []
100_000.times do |n|
  threads << Thread.new do
    File.open("test2.txt", "a") { |f|
      f << "#{n}content\n"
    }
  end
end
threads.each { |t| t.join }
```

So I couldn't reconstruct a scenario…
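Even so, the defensive pattern for a shared file handle is to serialize writes explicitly rather than rely on the interpreter lock making small appends effectively atomic. A minimal sketch (file name and iteration count are arbitrary):

```ruby
require 'thread' # Mutex; the explicit require is needed on 1.8.7

mutex = Mutex.new

File.open("test3.txt", "a") do |f|
  threads = []
  1_000.times do |n|
    threads << Thread.new do
      # Only one thread at a time may touch the shared handle.
      mutex.synchronize { f << "#{n}content\n" }
    end
  end
  threads.each { |t| t.join }
end
```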
