Fundamental concurrent programming

  1. Fundamental Concurrent Programming - Dimas Yudha P.
  2. What will we learn?
     1. Concurrent threads of execution (goroutines)
     2. Basic synchronization techniques (channels and locks)
     3. Basic concurrency patterns in Go
     4. Deadlocks and data races
     5. Parallel computation
  3. 1. Concurrent threads of execution (goroutines)
     Go permits starting a new thread of execution, a goroutine, using the go statement. It runs a function in a different, newly created goroutine. All goroutines in a single program share the same address space.
     Internally, goroutines are multiplexed onto multiple operating system threads. If one goroutine blocks an OS thread, for example while waiting for input, other goroutines on that thread are migrated so that they can continue running.
  4. Sample 1 (https://play.golang.org/p/VsTNshtBph)
     The following program will print "Hello from main goroutine". It might also print "Hello from another goroutine", depending on which of the two goroutines finishes first.

     func main() {
         go fmt.Println("Hello from another goroutine")
         fmt.Println("Hello from main goroutine")
         // At this point the program execution stops and all
         // active goroutines are killed.
     }
  5. Sample 2 (https://play.golang.org/p/hBRcT8aH9d)
     The next program will, most likely, print both "Hello from main goroutine" and "Hello from another goroutine". They might be printed in any order. Yet another possibility is that the second goroutine is extremely slow and doesn’t print its message before the program ends.

     func main() {
         go fmt.Println("Hello from another goroutine")
         fmt.Println("Hello from main goroutine")
         time.Sleep(time.Second) // wait 1 sec for the other goroutine to finish
     }
  6. 2. Channels
     A channel is a Go language construct that provides a mechanism for two goroutines to synchronize execution and communicate by passing a value of a specified element type. The <- operator specifies the channel direction, send or receive. If no direction is given, the channel is bidirectional.

     chan Progress   // can be used to send and receive values of type Progress
     chan<- float64  // can only be used to send float64s
     <-chan int      // can only be used to receive ints
  7. 2. Channels (Cont’d)
     Channels are a reference type and are allocated with make.

     ic := make(chan int)        // unbuffered channel of ints
     wc := make(chan *Work, 10)  // buffered channel of pointers to Work

     If the channel is unbuffered, the sender blocks until the receiver has received the value. If the channel has a buffer, the sender blocks only until the value has been copied to the buffer; if the buffer is full, this means waiting until some receiver has retrieved a value. Receivers block until there is data to receive.
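     To make these blocking rules concrete, here is a small runnable sketch (not from the deck; the variable names buf and unbuf are illustrative). A buffered channel with capacity 2 accepts two sends without any receiver, while a send on an unbuffered channel needs a receiver running in another goroutine.

     package main

     import "fmt"

     func main() {
         // Buffered: capacity 2, so two sends succeed without a receiver.
         buf := make(chan int, 2)
         buf <- 1
         buf <- 2
         // A third send here (buf <- 3) would block until someone receives.
         fmt.Println(<-buf, <-buf) // Output: 1 2

         // Unbuffered: the send blocks until a receiver is ready,
         // so the sender must run in another goroutine.
         unbuf := make(chan string)
         go func() { unbuf <- "hello" }()
         fmt.Println(<-unbuf) // Output: hello
     }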
  8. 2. Channels (Cont’d)
     To send a value on a channel, use <- as a binary operator. To receive a value on a channel, use it as a unary operator.

     ic <- 3        // Send 3 on the channel.
     work := <-wc   // Receive a pointer to Work from the channel.
  9. 2. Channels (Cont’d): Close
     The close function records that no more values will be sent on a channel. After calling close, and after any previously sent values have been received, receive operations will return a zero value without blocking.
  10. Sample 1 (https://play.golang.org/p/x4Kc7QNycL)

      func main() {
          ch := make(chan string)
          go func() {
              ch <- "Hello!"
              close(ch)
          }()
          fmt.Println(<-ch) // prints "Hello!"
          fmt.Println(<-ch) // prints the zero value "" without blocking
          fmt.Println(<-ch) // once again prints ""
          value, ok := <-ch // value is "", ok is false
          fmt.Println(value)
          fmt.Println(ok)
      }
  11. 3. Deadlock
      A deadlock is a situation in which threads are waiting for each other and none of them is able to proceed.
  12. Sample 1 (https://play.golang.org/p/TOzTbHYwge)

      func main() {
          ch := make(chan string)
          go func() {
              ch <- "Hello!"
              // close(ch) // if we don't close the channel, what happens?
          }()
          fmt.Println(<-ch) // prints "Hello!"
          fmt.Println(<-ch) // blocks forever: the channel is never closed and no more values arrive
          fmt.Println(<-ch) // never reached
          value, ok := <-ch // never reached
          fmt.Println(value)
          fmt.Println(ok)
      }
  13. 3. Deadlock
      Go has good support for deadlock detection at runtime. In a situation where no goroutine is able to make progress, a Go program will often provide a detailed error message. Here is the output from our broken program:

      Hello!
      fatal error: all goroutines are asleep - deadlock!

      goroutine 1 [chan receive]:
      main.main()
          /tmp/sandbox395724642/main.go:15 +0x160
  14. 4. Data race
      A data race occurs when two threads access the same variable concurrently and at least one of the accesses is a write. A deadlock may sound bad, but the truly disastrous errors that come with concurrent programming are data races. They are quite common and can be very hard to debug.
  15. Sample 1 (https://play.golang.org/p/BtZJSaVyXE)
      This function has a data race and its behavior is undefined. The two goroutines participate in a race, and there is no way to know in which order the conflicting operations will take place.

      func main() {
          wait := make(chan struct{})
          n := 0
          go func() {
              n++ // one access: read, increment, write
              close(wait)
          }()
          n++ // another conflicting access
          <-wait
          fmt.Println(n) // Output: UNSPECIFIED
      }
  16. 4. Data race
      The only way to avoid data races is to synchronize access to all mutable data that is shared between threads. There are several ways to achieve this. In Go, you would normally use a channel or a lock.
  17. Sample 2 (https://play.golang.org/p/ubuq2zwm8G)
      The preferred way to handle concurrent data access in Go is to use a channel to pass the actual data from one goroutine to the next.

      func main() {
          ch := make(chan int)
          go func() {
              n := 0 // A local variable is only visible to one goroutine.
              n++
              ch <- n // The data leaves one goroutine...
          }()
          n := <-ch // ...and arrives safely in another goroutine.
          n++
          fmt.Println(n) // Output: 2
      }
  18. 4. Mutual exclusion lock
      Sometimes it’s more convenient to synchronize data access by explicit locking instead of using channels. The Go standard library offers a mutual exclusion lock, sync.Mutex, for this purpose. For this type of locking to work, it’s crucial that all accesses to the shared data, both reads and writes, are performed only while a goroutine holds the lock. One mistake by a single goroutine is enough to break the program and introduce a data race.
  19. Sample 1 (https://play.golang.org/p/Ds6XQQ9T46)
      In this sample we build a safe and easy-to-use concurrent data structure, AtomicInt, that stores a single integer. Any number of goroutines can safely access this number through the Add and Value methods.
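      The AtomicInt code itself lives behind the playground link and is not in this transcript; the following is a reconstructed sketch, assuming the usual pattern of a sync.Mutex guarding an int exposed through Add and Value methods.

      package main

      import (
          "fmt"
          "sync"
      )

      // AtomicInt is an int that can be safely shared between goroutines:
      // every access to n happens while the mutex is held.
      type AtomicInt struct {
          mu sync.Mutex
          n  int
      }

      // Add increments the number by the given amount.
      func (a *AtomicInt) Add(n int) {
          a.mu.Lock()
          a.n += n
          a.mu.Unlock()
      }

      // Value returns the current number.
      func (a *AtomicInt) Value() int {
          a.mu.Lock()
          n := a.n
          a.mu.Unlock()
          return n
      }

      func main() {
          var wait sync.WaitGroup
          var counter AtomicInt
          for i := 0; i < 100; i++ {
              wait.Add(1)
              go func() {
                  counter.Add(1) // safe concurrent access
                  wait.Done()
              }()
          }
          wait.Wait()
          fmt.Println(counter.Value()) // Output: 100
      }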
  20. 5. Detecting data races
      Go (starting with version 1.1) has a powerful data race detector. The tool is simple to use: just add the -race flag to the go command. Running the racy program above with the detector turned on produces a clear and informative report pointing at the conflicting accesses.
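      For reference, the detector is enabled by adding -race to the usual go commands (main.go below is just a placeholder for the racy program above):

      go run -race main.go
      go test -race ./...
      go build -race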
  21. 6. Select statement
      The select statement is the final tool in Go’s concurrency toolkit. It chooses which of a set of possible communications will proceed. If any of the communications can proceed, one of them is randomly chosen and the corresponding statements are executed. Otherwise, if there is no default case, the statement blocks until one of the communications can complete.
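      The transcript carries no select example at this point, so here is a small sketch (the channel names fast and slow are illustrative, not from the deck): the first select blocks until one of two channels is ready, and the second shows how a default case makes select non-blocking.

      package main

      import (
          "fmt"
          "time"
      )

      func main() {
          fast := make(chan string)
          slow := make(chan string)

          go func() {
              time.Sleep(100 * time.Millisecond)
              fast <- "from the fast channel"
          }()
          go func() {
              time.Sleep(time.Second)
              slow <- "from the slow channel"
          }()

          // select blocks until one of the communications can proceed;
          // here the fast channel becomes ready first.
          select {
          case msg := <-fast:
              fmt.Println(msg)
          case msg := <-slow:
              fmt.Println(msg)
          }

          // With a default case, select never blocks: if no channel is
          // ready right now, the default branch runs immediately.
          select {
          case msg := <-slow:
              fmt.Println(msg)
          default:
              fmt.Println("nothing ready yet")
          }
      }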
  23. 7. Parallel computation
      One application of concurrency is to divide a large computation into work units that can be scheduled for simultaneous computation on separate CPUs (a small sketch of such a split follows this list). Rules of thumb:
      ● Each work unit should take about 100μs to 1ms to compute. If the units are too small, the administrative overhead of dividing the problem and scheduling sub-problems might be too large. If the units are too big, the whole computation may have to wait for a single slow work item to finish. This slowdown can happen for many reasons, such as scheduling, interrupts from other processes, and unfortunate memory layout. (Note that the number of work units is independent of the number of CPUs.)
      ● Try to minimize the amount of data sharing. Concurrent writes can be very costly, particularly so if goroutines execute on separate CPUs. Sharing data for reading is often much less of a problem.
      ● Strive for good locality when accessing data. If data can be kept in cache memory, data loading and storing will be dramatically faster. Once again, this is particularly important for writing.
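      A minimal sketch of these rules of thumb (the work function sum and the data set are illustrative, not from the deck): the slice is split into one chunk per CPU, each goroutine writes only to its own local total, and the partial results are collected over a channel.

      package main

      import (
          "fmt"
          "runtime"
      )

      // sum adds the numbers in a single work unit (one chunk of the slice).
      func sum(chunk []int) int {
          total := 0
          for _, v := range chunk {
              total += v
          }
          return total
      }

      func main() {
          data := make([]int, 1000000)
          for i := range data {
              data[i] = 1
          }

          workers := runtime.NumCPU()
          chunkSize := (len(data) + workers - 1) / workers
          results := make(chan int, workers)

          // Each goroutine works on its own chunk (no shared writes)
          // and sends its partial result over the channel.
          parts := 0
          for start := 0; start < len(data); start += chunkSize {
              end := start + chunkSize
              if end > len(data) {
                  end = len(data)
              }
              parts++
              go func(chunk []int) {
                  results <- sum(chunk)
              }(data[start:end])
          }

          total := 0
          for i := 0; i < parts; i++ {
              total += <-results
          }
          fmt.Println(total) // Output: 1000000
      }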
  24. 7. Parallel computation (Cont’d)
      With Go versions before 1.5, the runtime schedules goroutines onto a single CPU by default, so you may need to tell it how many goroutines you want executing code simultaneously.

      func init() {
          numcpu := runtime.NumCPU()
          runtime.GOMAXPROCS(numcpu) // Try to use all available CPUs.
      }
