Techniques for developing highly reliable servers in Go

From time to time, web programmers run into tasks that can frighten even seasoned professionals: server applications that leave no margin for error, projects in which the cost of failure is extremely high. The author of the material we are translating today explains how to approach such problems.



What level of reliability does your project need?


Before diving into the details of developing highly reliable server applications, ask yourself whether your project really needs the highest attainable level of reliability. Building systems for scenarios in which an error is akin to catastrophe can prove unreasonably demanding for the majority of projects, where the consequences of possible errors are not particularly dire.

If the cost of an error is not extremely high, a perfectly acceptable approach is for the developer to make reasonable efforts to keep the project working and simply deal with problems as they arise. Modern monitoring tools and continuous deployment workflows make it possible to spot problems in production quickly and fix them almost instantly. In many cases this is quite enough.

In the project I am working on today, this is not the case. We are talking about a blockchain implementation: a distributed server infrastructure for securely executing code and reaching consensus in a low-trust environment. One application of this technology is digital currencies. It is a classic example of a system where the cost of an error is extremely high, so its developers really do need to make it very, very reliable.

However, striving for highly reliable code also makes sense in other projects, even ones unrelated to finance. The cost of maintaining a frequently breaking code base can reach astronomical levels very quickly. The ability to catch problems at the early stages of development, while fixing them is still cheap, is a very real reward for investing time and effort in a methodology for building highly reliable systems.

Perhaps the solution is TDD?


Test-Driven Development (TDD) is often considered the best cure for bad code. TDD is a purist development methodology in which tests are written first, and code is added to the project only once the tests that check it stop failing. This process guarantees 100% test coverage and often creates the illusion that the code has been verified in every possible use case.

However, that is not so. TDD is an excellent methodology that works well in some areas, but it is not enough for developing truly reliable code. Worse, TDD can instill false confidence: relying on it, a developer may simply never get around to writing tests for failure scenarios whose occurrence seems, from a common-sense point of view, almost impossible. We will return to this later.

Tests are the key to reliability


In fact, it does not matter whether you create tests before or after writing the code, or whether you use a methodology like TDD at all. What matters is that the tests exist. Tests are the best defensive fortification protecting your code from problems in production.

Since we are going to run our tests very often, ideally after every new line added to the code, they must be automated. Our confidence in the quality of the code should in no way rest on manual checks: people tend to make mistakes, and a person's attention to detail fades after repeating the same monotonous task many times in a row.

Tests should be quick. Very fast.

If running the test suite takes more than a few seconds, developers will most likely get lazy and start adding code to the project without testing it. Speed is one of Go's strengths: its toolchain is among the fastest in existence, and compiling, rebuilding and testing a project takes seconds.

Tests are also one of the major driving forces of open-source projects, and that includes everything related to blockchain technology, where open source is almost a religion. For a code base to earn the trust of those who will use it, it must be open: this makes audits possible and creates an atmosphere of decentralization in which no single entity controls the project.

It makes no sense to expect significant contributions to an open-source project from external developers if the project does not include quality tests. External contributors need a mechanism for quickly verifying that what they wrote is compatible with what is already in the project. In fact, the entire test suite should run automatically on every request to add new code, and if a proposed change breaks something, the tests should report it immediately.

Full test coverage of the code base is a deceptive but important metric. The goal of 100% coverage may seem excessive, but think about it: with incomplete coverage, part of the code goes to production unchecked, having never been executed before.

Full coverage does not necessarily mean the project has enough tests, and it does not mean the tests exercise absolutely every way the code can be used. All we can say with confidence is that if a project is not 100% covered, the developer cannot be sure the code is reliable, since some parts of it are never tested.

Notwithstanding the above, there are also situations where there are too many tests. Ideally, every possible error should cause exactly one test to fail. If tests are redundant, that is, different tests check the same code fragments, then modifying existing code and changing existing behavior forces you to spend too much time bringing all the affected tests in line with the new code.

Why Go is an excellent choice for highly reliable projects


Go is a statically typed language. Types are a contract between different pieces of code that work together. Without automatic type checking at build time, strict test coverage rules would force us to implement tests that check these "contracts" ourselves; this is what happens, for example, in server and client projects based on JavaScript. Writing elaborate tests aimed only at checking types means a lot of extra work that Go lets us avoid.

Go is a simple and dogmatic language. As you know, Go leaves out many traditional programming-language ideas, such as classic OOP inheritance. Complexity is the worst enemy of reliable code. Problems tend to hide at the joints of complex structures: while it is easy to test the typical ways a construct is used, there are bizarre edge cases the test author may not even think about, and one of them will eventually bring the project down. Dogmatism is useful in this sense too. In Go there is often only one way to do something. This may seem to cramp the programmer's free spirit, but when something can be done in only one way, it is hard to do it wrong.

Go is concise but expressive. Readable code is easier to analyze and audit. If code is too verbose, its main purpose can drown in the "noise" of auxiliary constructs; if it is too terse, it can be hard to read and understand. Go strikes a balance between conciseness and expressiveness: it does not carry as much boilerplate as languages like Java or C++, while its constructs in areas such as error handling are explicit and detailed enough to help the programmer make sure, for example, that every possible failure has been checked.

Go has clear mechanisms for handling errors and recovering from crashes. Well-defined run-time error handling is a cornerstone of highly reliable code, and Go has strict conventions for returning and propagating errors. In environments like Node.js, mixing flow-control approaches such as callbacks, promises and async functions often leads to unhandled errors, such as unhandled promise rejections, from which the program can hardly recover.
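For readers coming from such environments, here is a minimal, self-contained sketch (not taken from the project) of what this explicit style looks like in Go: every fallible call returns an error value, and the caller deals with it immediately.

package main

import (
    "errors"
    "fmt"
)

// readBalance is a hypothetical helper used only for this illustration.
// Instead of throwing, it returns an error value alongside the result.
func readBalance(user string) (int32, error) {
    if user == "" {
        return 0, errors.New("empty user name")
    }
    return 100, nil
}

func main() {
    // The error is part of the function's signature, so both the compiler and
    // the reader see exactly where a failure can occur and how it is handled.
    balance, err := readBalance("user1")
    if err != nil {
        fmt.Println("read failed:", err)
        return
    }
    fmt.Println("balance:", balance)
}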

Go has an extensive standard library. Dependencies are a risk, especially when they come from projects that do not pay enough attention to code reliability. The server application that goes to production ships with all of its dependencies, and if something goes wrong, the developer of the finished application is the one held responsible, not the author of one of the libraries it uses. As a result, it is harder to create reliable applications in ecosystems where projects are packed with small dependencies.

Dependencies are also a security risk, since a project is only as secure as its most vulnerable dependency . Go's extensive standard library is kept in very good shape by its developers, and its existence reduces the need for external dependencies.

High development speed. The main attraction of environments like Node.js is an extremely short development cycle: writing code takes less time, and the programmer becomes more productive.

Go also offers high development speed. The build toolchain is fast enough that you can see your code in action almost instantly. Compile times are so short that working with Go feels more like an interpreted language than a compiled one. At the same time, the language provides enough abstractions, such as garbage collection, to let developers focus on implementing their project's functionality rather than on auxiliary chores.

Practical experiment


Now that we have covered enough general ground, it is time to look at some code. We need an example simple enough that we can focus on the development methodology while studying it, yet rich enough to give us something to discuss. I decided it would be easiest to take something from my day-to-day work, so I propose building a server that handles something resembling financial transactions. Users of this server will be able to check the balances of the accounts associated with their user records and transfer funds from one account to another.

We will try not to overcomplicate this example. Our system will consist of a single server, and we will not deal with authentication or cryptography, even though they are integral parts of production projects. We need to focus on the core of such a project and show how to make it as reliable as possible.

▍Decomposing a complex project into manageable parts


Complexity is the worst enemy of reliability. One of the best approaches to dealing with complex systems is the time-honored principle of "divide and conquer": split the task into small subtasks and solve each of them separately. How should we divide our task? We will follow the principle of separation of responsibilities : each part of our project should have its own, well-defined area of responsibility.

This idea fits perfectly with the popular microservice architecture. Our server will consist of separate services, each with a well-defined area of responsibility and a well-defined interface for interacting with the others.

Once the server is structured this way, we can decide how each service should run. All services can run together in the same process; each can be turned into a separate server with interaction over RPC; or the services can be split up and each run on its own machine.

We will not overcomplicate things and will pick the simplest option: all services run in the same process and exchange information directly, like libraries. If necessary, this architectural decision can easily be revisited and changed later.

So which services do we need? Our server is perhaps too simple to split into parts, but for educational purposes we will do it anyway. We need to respond to clients' HTTP requests to check balances and perform transactions. One service can provide the HTTP interface for clients; let's call it PublicApi . Another service will hold the state of the system, the ledger of balances; let's call it StateStorage . A third service will tie the two together and implement the logic of the "contracts" that change the balances; since its job is executing contracts, let's call it VirtualMachine .


Application Server Architecture

We place the code for these services in /services/publicapi , /services/virtualmachine and /services/statestorage .
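To make the "services as plain libraries" idea concrete, here is a minimal sketch of how a Node could wire the three services together inside one process. The Node interface and the NewNode() constructor appear later in the article's tests; the constructors of the individual services, the import paths and the Start() signatures of PublicApi and StateStorage are assumptions made for this sketch.

package services

import (
    "services/publicapi"      // import paths follow the folder layout above
    "services/statestorage"   // and are assumptions of this sketch
    "services/virtualmachine"
)

// Node wires the three services together inside a single process.
type Node interface {
    Start()
    Stop()
}

type node struct {
    publicApi      publicapi.Service
    virtualMachine virtualmachine.Service
    stateStorage   statestorage.Service
}

func NewNode() Node {
    return &node{
        publicApi:      publicapi.NewService(),
        virtualMachine: virtualmachine.NewService(),
        stateStorage:   statestorage.NewService(),
    }
}

// Start injects each dependency directly, as an ordinary library call,
// so no RPC transport is needed while everything runs in one process.
func (n *node) Start() {
    n.stateStorage.Start()
    n.virtualMachine.Start(n.stateStorage)
    n.publicApi.Start(n.virtualMachine)
}

func (n *node) Stop() {
    n.publicApi.Stop()
    n.virtualMachine.Stop()
    n.stateStorage.Stop()
}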

▍Clearly defining the boundaries of service responsibility


While implementing the services, we want to be able to work on each of them separately; it should even be possible to split their development among different programmers. Since the services depend on each other and we want to parallelize their development, we need to start by clearly defining the interfaces they use to interact. With these interfaces in place, we will be able to test each service autonomously, stubbing out everything that lies outside it.

How do we describe an interface? One option is to document everything, but documentation tends to go stale, and discrepancies between documentation and code accumulate as work on the project continues. Another option is Go interface declarations. That is interesting, but it is better to describe the interfaces in a way that does not depend on a particular programming language. This will prove genuinely useful if, in the course of the project, we decide to implement some of its services in other languages whose capabilities better suit their tasks.

One option for describing interfaces is protobuf , a simple, language-independent protocol developed by Google for describing messages and service endpoints.

Let's start with the interface of the StateStorage service. The application state will be represented as a key-value structure. Here is the code of the statestorage.proto file:

syntax = "proto3";

package statestorage;

service StateStorage {
    rpc WriteKey (WriteKeyInput) returns (WriteKeyOutput);
    rpc ReadKey (ReadKeyInput) returns (ReadKeyOutput);
}

message WriteKeyInput {
    string key = 1;
    int32 value = 2;
}

message WriteKeyOutput {
}

message ReadKeyInput {
    string key = 1;
}

message ReadKeyOutput {
    int32 value = 1;
}

Although clients talk to the PublicApi service over HTTP, it does not hurt to describe its interface just as clearly, with the same means as above (the publicapi.proto file):

syntax = "proto3";

package publicapi;

import "protocol/transactions.proto";

service PublicApi {
    rpc Transfer (TransferInput) returns (TransferOutput);
    rpc GetBalance (GetBalanceInput) returns (GetBalanceOutput);
}

message TransferInput {
    protocol.Transaction transaction = 1;
}

message TransferOutput {
    string success = 1;
    int32 result = 2;
}

message GetBalanceInput {
    protocol.Address from = 1;
}

message GetBalanceOutput {
    string success = 1;
    int32 result = 2;
}

Now we need to describe the Transaction and Address data structures (the transactions.proto file):

syntax = "proto3";

package protocol;

message Address {
    string username = 1;
}

message Transaction {
    Address from = 1;
    Address to = 2;
    int32 amount = 3;
}

In the project, the proto descriptions of the services are placed in the /types/services folder, and the descriptions of general-purpose data structures in the /types/protocol folder.

After the interface descriptions are ready, they can be compiled into Go code.

The advantage of this approach is that code which does not match the interface descriptions simply will not compile. With alternative methods we would have to write special tests to verify that the code matches the interface descriptions.
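One common way to lean on the compiler here is a compile-time conformance check. The sketch below is a stand-in written for this article: the interface mirrors statestorage.proto and the concrete type is a naive in-memory implementation, but the var _ Service = (*service)(nil) line is the idiom that makes the build fail the moment the implementation drifts away from the contract.

package statestorage

// Stand-in for the interface produced from statestorage.proto; in the real
// project this comes from the generated code, so its exact shape here is an
// assumption made for illustration.
type Service interface {
    WriteKey(input *WriteKeyInput) (*WriteKeyOutput, error)
    ReadKey(input *ReadKeyInput) (*ReadKeyOutput, error)
}

type WriteKeyInput struct {
    Key   string
    Value int32
}
type WriteKeyOutput struct{}

type ReadKeyInput struct {
    Key string
}
type ReadKeyOutput struct {
    Value int32
}

// Compile-time check: the build breaks as soon as *service stops matching the
// interface, so no separate "does the code match the contract" test is needed.
var _ Service = (*service)(nil)

// Naive in-memory implementation, present only to make the example complete.
type service struct {
    state map[string]int32
}

func (s *service) WriteKey(input *WriteKeyInput) (*WriteKeyOutput, error) {
    s.state[input.Key] = input.Value
    return &WriteKeyOutput{}, nil
}

func (s *service) ReadKey(input *ReadKeyInput) (*ReadKeyOutput, error) {
    return &ReadKeyOutput{Value: s.state[input.Key]}, nil
}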

The full definitions, the generated Go files and compilation instructions can be found here . This is made possible by Square Engineering and their goprotowrap tool.

Note that our project does not implement an RPC transport layer: data exchange between the services looks like ordinary library calls. When we are ready to spread the services across different servers, we can add a transport layer such as gRPC.

▍Types of tests used in the project


Since tests are the key to highly reliable code, let's first discuss which kinds of tests we will write for our project.

Unit Tests


Unit tests are the foundation of the testing pyramid . We test each module in isolation. What is a module? In Go, we can treat individual files in a package as modules. For example, if we have the file /services/publicapi/handlers.go , we place its unit tests in the same package, in /services/publicapi/handlers_test.go .

It is best to place unit tests in the same package as the code under test, which gives the tests access to unexported variables and functions.
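As an illustration of this placement, here is a minimal, hypothetical pair of files: an unexported helper that handlers.go might contain, and a unit test that lives next to it in the same package and can therefore call it directly. The helper ( parseAmount ) is invented for this sketch.

// services/publicapi/handlers.go (fragment of a sketch)
package publicapi

import "strconv"

// parseAmount converts the "amount" query parameter into the int32 value
// used by the rest of the system.
func parseAmount(raw string) (int32, error) {
    v, err := strconv.ParseInt(raw, 10, 32)
    return int32(v), err
}

// services/publicapi/handlers_test.go -- same package, so the unexported
// helper is reachable from the test without exporting anything.
package publicapi

import "testing"

func TestParseAmount(t *testing.T) {
    got, err := parseAmount("17")
    if err != nil || got != 17 {
        t.Fatalf(`parseAmount("17") = %d, %v; want 17, nil`, got, err)
    }
}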

Service tests


The next type of test goes by various names: service, integration, or component tests. Their essence is to take several modules and test how they work together. These tests sit one level above unit tests in the testing pyramid. In our case we will use them to verify each service as a whole, so they define the specification of the service. For example, the tests for the StateStorage service will live in the /services/statestorage/spec folder.

It is best to place these tests in a package different from the one containing the code under test, so that the code can be reached only through its exported interfaces.

End-to-end tests


These tests sit at the top of the testing pyramid; they exercise the entire system with all of its services. They describe the end-to-end (e2e) specification of the system, so we will place them in the /e2e/spec folder.

End-to-end tests, like service tests, should be placed in a package other than the one containing the code under test, so that the system is exercised only through its exported interfaces.

Which tests should we write first? Start at the foundation of the pyramid and move up, or start at the top and work down? Either approach is legitimate. The advantage of the top-down approach is that it produces a specification for the whole system first, and at the very beginning of the work it is usually easiest to reason about the system as a whole. Even if we split the system into services incorrectly, the system-level specification will remain valid and, moreover, will help us notice that something at a lower level has been done wrong.

The disadvantage of the top-down approach is that the end-to-end tests are the last ones to pass, only once the entire system has been built, which means they will keep failing for a long time. Nevertheless, this is the approach we will use when writing tests for our project.

Test Development


Development of end-to-end tests


Before writing the tests, we need to decide whether to write them with no additional tooling or to use a framework. Relying on a framework as a development dependency is less risky than relying on a framework in code that ships to production. In our case, since the Go standard library has no decent support for BDD , and that format suits specifications very well, we will use a framework.

There are many great frameworks that give us what we need. Among them are GoConvey and Ginkgo .

Personally, I like the combination of Ginkgo and Gomega (terrible names, but what can you do), which use syntactic constructs like Describe() and It() .

What will our tests look like? For example, here’s a test for the user’s balance check mechanism ( sanity.go file):

package spec

import ...

var _ = Describe("Sanity", func() {
    var (
        node services.Node
    )

    BeforeEach(func() {
        node = services.NewNode()
        node.Start()
    })

    AfterEach(func() {
        node.Stop()
    })

    It("should show balances with GET /api/balance", func() {
        resp, err := http.Get("http://localhost:8080/api/balance?from=user1")
        Expect(err).ToNot(HaveOccurred())
        Expect(resp.StatusCode).To(Equal(http.StatusOK))
        Expect(ResponseBodyAsString(resp)).To(Equal("0"))
    })
})

Since the server is accessible from the outside world over HTTP, we work with its web API using http.Get . What about testing a transfer? Here is the code of the corresponding test:

It("should transfer funds with POST /api/transfer", func() {
    resp, err := http.Post("http://localhost:8080/api/transfer?from=user1&to=user2&amount=17", "text/plain", nil)
    Expect(err).ToNot(HaveOccurred())
    Expect(resp.StatusCode).To(Equal(http.StatusOK))
    Expect(ResponseBodyAsString(resp)).To(Equal("-17"))

    resp, err = http.Get("http://localhost:8080/api/balance?from=user2")
    Expect(err).ToNot(HaveOccurred())
    Expect(resp.StatusCode).To(Equal(http.StatusOK))
    Expect(ResponseBodyAsString(resp)).To(Equal("17"))
})

The test code describes its own essence perfectly; it could even replace documentation. As you can see, we allow user accounts to have negative balances. This is a feature of our project; if it were forbidden, that decision would be reflected in the test.

Here is the full test code

Development of service tests


Now, with the end-to-end tests written, we descend the testing pyramid and move on to service tests. Such tests are developed for each individual service. Let's pick a service that depends on another service, since that case is more interesting than testing an independent one.

Let's start with the VirtualMachine service; its interface, with proto descriptions, can be found here . Since VirtualMachine relies on the StateStorage service and makes calls to it, we need to create a mock of StateStorage in order to test VirtualMachine in isolation. The mock will let us control StateStorage responses during testing.

How do we implement a mock in Go? We can do it purely with the language, without additional tools, or use a suitable library, which will also let us make assertions during testing. For this purpose I prefer the go-mock library.

We place the mock code in the file /services/statestorage/mock.go . It is best to keep mocks next to the entities they imitate, so that they have access to unexported variables and functions. At this stage the mock is a schematic implementation of the service, but as the service develops we may need to evolve the mock along with it. Here is the mock code (the mock.go file):

package statestorage

import ...

type MockService struct {
    mock.Mock
}

func (s *MockService) Start() {
    s.Called()
}

func (s *MockService) Stop() {
    s.Called()
}

func (s *MockService) IsStarted() bool {
    return s.Called().Bool(0)
}

func (s *MockService) WriteKey(input *statestorage.WriteKeyInput) (*statestorage.WriteKeyOutput, error) {
    ret := s.Called(input)
    return ret.Get(0).(*statestorage.WriteKeyOutput), ret.Error(1)
}

func (s *MockService) ReadKey(input *statestorage.ReadKeyInput) (*statestorage.ReadKeyOutput, error) {
    ret := s.Called(input)
    return ret.Get(0).(*statestorage.ReadKeyOutput), ret.Error(1)
}

If the development of the individual services is handed to different programmers, it makes sense to create the mocks first and give them to the team.

Let's return to the service test for VirtualMachine . Which scenarios should we check here? It is best to focus on the service interface and develop tests for each endpoint. Let's implement a test for the CallContract() endpoint with an argument naming the "GetBalance" method. Here is the corresponding code (the contracts.go file):

package spec

import ...

var _ = Describe("Contracts", func() {
    var (
        service      uut.Service
        stateStorage *_statestorage.MockService
    )

    BeforeEach(func() {
        service = uut.NewService()
        stateStorage = &_statestorage.MockService{}
        service.Start(stateStorage)
    })

    AfterEach(func() {
        service.Stop()
    })

    It("should support 'GetBalance' contract method", func() {
        stateStorage.When("ReadKey", &statestorage.ReadKeyInput{Key: "user1"}).Return(&statestorage.ReadKeyOutput{Value: 100}, nil).Times(1)

        addr := protocol.Address{Username: "user1"}
        out, err := service.CallContract(&virtualmachine.CallContractInput{Method: "GetBalance", Arg: &addr})

        Expect(err).ToNot(HaveOccurred())
        Expect(out.Result).To(BeEquivalentTo(100))
        Expect(stateStorage).To(ExecuteAsPlanned())
    })
})

Notice that the service under test, VirtualMachine , receives a pointer to its dependency, StateStorage , in its Start() method through a simple dependency injection mechanism; this is where we pass in the mock instance. Also note the line stateStorage.When("ReadKey", &statestorage.ReadKeyInput{Key… , where we tell the mock how to behave when it is called: when ReadKey is invoked, it should return 100, and it should be called exactly once. Later, with Expect(stateStorage).To(ExecuteAsPlanned()) , we verify that this is indeed what happened.

Such tests become specifications for the service. A complete set of tests for the VirtualMachine service can be found here . Test suites for other services of our project can be found here and here .

Development of unit tests


The implementation of the contract behind the "GetBalance" method is perhaps too simple, so let's talk about the somewhat more complicated "Transfer" method. The contract it represents, transferring funds from one account to another, needs to read the balances of the sender and the recipient, compute the new balances, and record the result in the application state. The service test for all this is very similar to the one we just implemented (the transactions.go file):

It("should support 'Transfer' transaction method", func() {
    stateStorage.When("ReadKey", &statestorage.ReadKeyInput{Key: "user1"}).Return(&statestorage.ReadKeyOutput{Value: 100}, nil).Times(1)
    stateStorage.When("ReadKey", &statestorage.ReadKeyInput{Key: "user2"}).Return(&statestorage.ReadKeyOutput{Value: 50}, nil).Times(1)
    stateStorage.When("WriteKey", &statestorage.WriteKeyInput{Key: "user1", Value: 90}).Return(&statestorage.WriteKeyOutput{}, nil).Times(1)
    stateStorage.When("WriteKey", &statestorage.WriteKeyInput{Key: "user2", Value: 60}).Return(&statestorage.WriteKeyOutput{}, nil).Times(1)

    t := protocol.Transaction{From: &protocol.Address{Username: "user1"}, To: &protocol.Address{Username: "user2"}, Amount: 10}
    out, err := service.ProcessTransaction(&virtualmachine.ProcessTransactionInput{Method: "Transfer", Arg: &t})

    Expect(err).ToNot(HaveOccurred())
    Expect(out.Result).To(BeEquivalentTo(90))
    Expect(stateStorage).To(ExecuteAsPlanned())
})

Working on the project, we eventually get to its internals and create the module placed in the file processor.go , which contains the implementation of the contract. Here is what its first version looks like (the processor.go file):

package virtualmachine

import ...

func (s *service) processTransfer(fromUsername string, toUsername string, amount int32) (int32, error) {
    fromBalance, err := s.stateStorage.ReadKey(&statestorage.ReadKeyInput{Key: fromUsername})
    if err != nil {
        return 0, err
    }

    toBalance, err := s.stateStorage.ReadKey(&statestorage.ReadKeyInput{Key: toUsername})
    if err != nil {
        return 0, err
    }

    _, err = s.stateStorage.WriteKey(&statestorage.WriteKeyInput{Key: fromUsername, Value: fromBalance.Value - amount})
    if err != nil {
        return 0, err
    }

    _, err = s.stateStorage.WriteKey(&statestorage.WriteKeyInput{Key: toUsername, Value: toBalance.Value + amount})
    if err != nil {
        return 0, err
    }

    return fromBalance.Value - amount, nil
}

This code satisfies the service test, but that integration test only covers the basic scenario. What about edge cases and potential failures? As you can see, any of the calls we make to StateStorage may fail. If we want 100% test coverage, we need to check all of these situations, and unit tests are the perfect place for such checks.

Since we are going to call the function several times with different inputs, simulating failures to reach every branch of the code, table-driven tests will make the process more efficient. In Go it is customary to avoid exotic frameworks in unit tests: we may drop Ginkgo , but we will probably keep Gomega . The checks performed here are therefore similar to those in the previous tests. Here is the test code (the processor_test.go file):

package virtualmachine

import ...

var transferTable = []struct {
    to        string // recipient user name
    read1Err  error  // error returned by the first ReadKey call
    read2Err  error  // error returned by the second ReadKey call
    write1Err error  // error returned by the first WriteKey call
    write2Err error  // error returned by the second WriteKey call
    output    int32  // expected result of the transfer
    errs      bool   // whether an error is expected
}{
    {"user2", errors.New("a"), nil, nil, nil, 0, true},
    {"user2", nil, errors.New("a"), nil, nil, 0, true},
    {"user2", nil, nil, errors.New("a"), nil, 0, true},
    {"user2", nil, nil, nil, errors.New("a"), 0, true},
    {"user2", nil, nil, nil, nil, 90, false},
}

func TestTransfer(t *testing.T) {
    Ω := NewGomegaWithT(t)

    for _, tt := range transferTable {
        s := NewService()
        ss := &_statestorage.MockService{}
        s.Start(ss)

        ss.When("ReadKey", &statestorage.ReadKeyInput{Key: "user1"}).Return(&statestorage.ReadKeyOutput{Value: 100}, tt.read1Err)
        ss.When("ReadKey", &statestorage.ReadKeyInput{Key: "user2"}).Return(&statestorage.ReadKeyOutput{Value: 50}, tt.read2Err)
        ss.When("WriteKey", &statestorage.WriteKeyInput{Key: "user1", Value: 90}).Return(&statestorage.WriteKeyOutput{}, tt.write1Err)
        ss.When("WriteKey", &statestorage.WriteKeyInput{Key: "user2", Value: 60}).Return(&statestorage.WriteKeyOutput{}, tt.write2Err)

        output, err := s.(*service).processTransfer("user1", tt.to, 10)

        if tt.errs {
            Ω.Expect(err).To(HaveOccurred())
        } else {
            Ω.Expect(err).ToNot(HaveOccurred())
            Ω.Expect(output).To(BeEquivalentTo(tt.output))
        }
    }
}

"Ω" here is just a variable name that comes from Gomega ; there is nothing special about it, and you can call it whatever you like.

Incidentally, if we had been following TDD strictly, we would have written these failure-scenario tests before the code, and the implementation of processTransfer() would have grown out of them.

The complete set of unit tests for the VirtualMachine service can be found here .

At this point we have reached 100% test coverage: every branch of the code, including the failure scenarios, is exercised by tests.

Does this mean our code is now completely reliable? Unfortunately not. Full coverage only proves that every line runs under the tests; it says nothing about the scenarios the tests never create.

▍Stress tests


So what have we not tested? Concurrency. Go's HTTP server handles every incoming request in a separate goroutine, which means our handlers run in parallel. None of the tests written so far exercise the system under simultaneous access, so an entire class of problems, race conditions among them, has gone completely unchecked.
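To make this concrete, here is a tiny standalone illustration, not part of the project: because net/http runs each handler on its own goroutine, any state the handlers share must be safe for concurrent use. Here a shared counter is updated atomically; a plain increment would be a data race.

package main

import (
    "fmt"
    "net/http"
    "sync/atomic"
)

// requests is shared by all handler goroutines.
var requests int64

func handler(w http.ResponseWriter, r *http.Request) {
    // Each request runs on its own goroutine, so the counter is updated
    // atomically; a plain requests++ here would be a data race.
    n := atomic.AddInt64(&requests, 1)
    fmt.Fprintf(w, "request #%d\n", n)
}

func main() {
    http.HandleFunc("/", handler)
    http.ListenAndServe(":8080", nil)
}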

This is exactly what stress tests are for. A stress test floods the system with a large number of concurrent requests, the kind of load it will face in production, and then checks that the system ends up in the expected state. We place the stress tests in the /e2e/stress folder. Here is the code of the stress test (the stress.go file):

package stress

import ...

const NUM_TRANSACTIONS = 20000
const NUM_USERS = 100
const TRANSACTIONS_PER_BATCH = 200
const BATCHES_PER_SEC = 40

var _ = Describe("Transaction Stress Test", func() {
    var (
        node services.Node
    )

    BeforeEach(func() {
        node = services.NewNode()
        node.Start()
    })

    AfterEach(func() {
        node.Stop()
    })

    It("should handle lots and lots of transactions", func() {
        // configure an HTTP client with a connection pool large enough for a whole batch
        transport := http.Transport{
            IdleConnTimeout:     time.Second * 20,
            MaxIdleConns:        TRANSACTIONS_PER_BATCH * 10,
            MaxIdleConnsPerHost: TRANSACTIONS_PER_BATCH * 10,
        }
        client := &http.Client{Transport: &transport}

        // keep a local ledger with the balances we expect to see
        ledger := map[string]int32{}
        for i := 0; i < NUM_USERS; i++ {
            ledger[fmt.Sprintf("user%d", i+1)] = 0
        }

        // send the transactions in concurrent HTTP batches
        rand.Seed(42)
        done := make(chan error, TRANSACTIONS_PER_BATCH)
        for i := 0; i < NUM_TRANSACTIONS/TRANSACTIONS_PER_BATCH; i++ {
            log.Printf("Sending %d transactions... (batch %d out of %d)", TRANSACTIONS_PER_BATCH, i+1, NUM_TRANSACTIONS/TRANSACTIONS_PER_BATCH)
            time.Sleep(time.Second / BATCHES_PER_SEC)

            for j := 0; j < TRANSACTIONS_PER_BATCH; j++ {
                from := randomizeUser()
                to := randomizeUser()
                amount := randomizeAmount()
                ledger[from] -= amount
                ledger[to] += amount
                go sendTransaction(client, from, to, amount, &done)
            }

            for j := 0; j < TRANSACTIONS_PER_BATCH; j++ {
                err := <-done
                Expect(err).ToNot(HaveOccurred())
            }
        }

        // verify that the server's balances match the local ledger
        for i := 0; i < NUM_USERS; i++ {
            user := fmt.Sprintf("user%d", i+1)
            resp, err := client.Get(fmt.Sprintf("http://localhost:8080/api/balance?from=%s", user))
            Expect(err).ToNot(HaveOccurred())
            Expect(resp.StatusCode).To(Equal(http.StatusOK))
            Expect(ResponseBodyAsString(resp)).To(Equal(fmt.Sprintf("%d", ledger[user])))
        }
    })
})

func randomizeUser() string {
    return fmt.Sprintf("user%d", rand.Intn(NUM_USERS)+1)
}

func randomizeAmount() int32 {
    return rand.Int31n(1000) + 1
}

func sendTransaction(client *http.Client, from string, to string, amount int32, done *chan error) {
    url := fmt.Sprintf("http://localhost:8080/api/transfer?from=%s&to=%s&amount=%d", from, to, amount)
    resp, err := client.Post(url, "text/plain", nil)
    if err == nil {
        ioutil.ReadAll(resp.Body)
        resp.Body.Close()
    }
    *done <- err
}

Note that the stress test relies on random numbers, but the generator is seeded with a constant ( rand.Seed(42) ), so every run produces the same sequence of transactions. The test transfers random amounts between random users, keeps a local ledger with the balances it expects, and at the end compares that ledger with what the server reports. The fixed seed matters: if the test fails, we can reproduce the failure exactly.

The stress test fires off many HTTP requests in parallel, which puts pressure on TCP connections (they are constantly being opened, closed and left in various states). That is why the client's transport is configured to keep an idle-connection pool large enough for a whole batch of 200 transactions, well above the default limit of about 100 idle connections.

We run the stress test, and here is what we get:

fatal error: concurrent map writes

goroutine 539 [running]:
runtime.throw(0x147bf60, 0x15)
    /usr/local/go/src/runtime/panic.go:616 +0x81 fp=0xc4207159d8 sp=0xc4207159b8 pc=0x102ca01
runtime.mapassign_faststr(0x13f5140, 0xc4201ca0c0, 0xc4203a8097, 0x6, 0x1012001)
    /usr/local/go/src/runtime/hashmap_fast.go:703 +0x3e9 fp=0xc420715a48 sp=0xc4207159d8 pc=0x100d879
services/statestorage.(*service).WriteKey(0xc42000c060, 0xc4209e6800, 0xc4206491a0, 0x0, 0x0)
    services/statestorage/methods.go:15 +0x10c fp=0xc420715a88 sp=0xc420715a48 pc=0x138339c
services/virtualmachine.(*service).processTransfer(0xc4201ca090, 0xc4203a8097, 0x6, 0xc4203a80a1, 0x6, 0x2a4, 0xc420715b30, 0x1012928, 0x40)
    services/virtualmachine/processor.go:19 +0x16e fp=0xc420715ad0 sp=0xc420715a88 pc=0x13840ee
services/virtualmachine.(*service).ProcessTransaction(0xc4201ca090, 0xc4209e67c0, 0x30, 0x1433660, 0x12a1d01)

Ginkgo ran 1 suite in 1.288879763s
Test Suite Failed

What happened? The StateStorage service keeps its state in an ordinary Go map , and Go maps are not safe for concurrent writes: several goroutines hit WriteKey at the same time, the runtime detected it and killed the process. The fix is straightforward enough: protect the map with a mutex or replace it with sync.Map . A sketch of the first option follows below.
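Here is a minimal, self-contained sketch of the mutex option. The field names and the naive in-memory layout are assumptions; the input and output structs mirror the generated proto types.

package statestorage

import "sync"

// Input/output structs mirroring the generated proto types (redeclared here
// only so the sketch compiles on its own).
type WriteKeyInput struct {
    Key   string
    Value int32
}
type WriteKeyOutput struct{}
type ReadKeyInput struct {
    Key string
}
type ReadKeyOutput struct {
    Value int32
}

type service struct {
    mutex sync.RWMutex // guards state against concurrent access
    state map[string]int32
}

// WriteKey takes the write lock, so concurrent writes from many goroutines
// no longer trip the runtime's "concurrent map writes" check.
func (s *service) WriteKey(input *WriteKeyInput) (*WriteKeyOutput, error) {
    s.mutex.Lock()
    defer s.mutex.Unlock()
    s.state[input.Key] = input.Value
    return &WriteKeyOutput{}, nil
}

// ReadKey takes the read lock so reads stay consistent with in-flight writes.
func (s *service) ReadKey(input *ReadKeyInput) (*ReadKeyOutput, error) {
    s.mutex.RLock()
    defer s.mutex.RUnlock()
    return &ReadKeyOutput{Value: s.state[input.Key]}, nil
}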

Take another look at processTransfer() as well. A transfer is not an atomic operation: we read two balances, compute new values and write them back, and if another transfer involving the same accounts squeezes in between those calls, part of the changes is lost. In other words, processTransfer() hides a race condition of its own. For now, though, let's fix only the map and see what happens.

We protect the map and run the stress test again. This time nothing crashes, but the test still fails:

e2e/stress/transactions.go:44
Expected
    <string>: -7498
to equal
    <string>: -7551

e2e/stress/transactions.go:82
------------------------------
Ginkgo ran 1 suite in 5.251593179s
Test Suite Failed

The final balances no longer match our local ledger: in this run the server reports -7498 where -7551 is expected, and the discrepancy differs from run to run. Nothing panicked and no request returned an error; the data was simply corrupted silently.

This is a classic race condition, and neither TDD nor our tests caught it earlier. How is this possible? Don't we have 100% coverage?! The point is that coverage only tells us which lines were executed, not under which conditions. processTransfer() performs a read-modify-write sequence, and when two transfers touching the same account run concurrently, they interleave and one of them overwrites the result of the other.

This is exactly the kind of bug that slips past unit and service tests and surfaces only under concurrent load. That is what stress tests are for.

Results


So, we wrote unit, service, end-to-end and stress tests, and a serious bug still made it all the way to the last line of defense. Does that mean the approach failed? Quite the opposite: the point of building the defense in layers is that one layer catches what the others miss.

How do we fix the race itself? The most straightforward option is to serialize the work of processTransfer() , for example by holding a mutex across the whole read-modify-write sequence, so that only one transfer is processed at a time. That restores correctness at the price of concurrency; a sketch of this option follows below.
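Below is a minimal, self-contained sketch of that straightforward option. The local interface and input/output types stand in for the statestorage types used in the article's listings, and the added transferMutex field name is an assumption.

package virtualmachine

import "sync"

// Local stand-ins so the sketch compiles on its own; in the project these
// come from the statestorage packages shown earlier.
type ReadKeyInput struct{ Key string }
type ReadKeyOutput struct{ Value int32 }
type WriteKeyInput struct {
    Key   string
    Value int32
}
type WriteKeyOutput struct{}

type stateStorage interface {
    ReadKey(input *ReadKeyInput) (*ReadKeyOutput, error)
    WriteKey(input *WriteKeyInput) (*WriteKeyOutput, error)
}

type service struct {
    transferMutex sync.Mutex // added: serializes transfers
    stateStorage  stateStorage
}

// processTransfer now holds the lock across the whole read-modify-write
// sequence, so two concurrent transfers can no longer interleave and
// silently lose one of the updates.
func (s *service) processTransfer(fromUsername string, toUsername string, amount int32) (int32, error) {
    s.transferMutex.Lock()
    defer s.transferMutex.Unlock()

    fromBalance, err := s.stateStorage.ReadKey(&ReadKeyInput{Key: fromUsername})
    if err != nil {
        return 0, err
    }
    toBalance, err := s.stateStorage.ReadKey(&ReadKeyInput{Key: toUsername})
    if err != nil {
        return 0, err
    }
    if _, err = s.stateStorage.WriteKey(&WriteKeyInput{Key: fromUsername, Value: fromBalance.Value - amount}); err != nil {
        return 0, err
    }
    if _, err = s.stateStorage.WriteKey(&WriteKeyInput{Key: toUsername, Value: toBalance.Value + amount}); err != nil {
        return 0, err
    }
    return fromBalance.Value - amount, nil
}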

There are more refined options as well. For example, we could lock individual accounts rather than the whole service, or extend the StateStorage interface so that, instead of a series of separate WriteKey calls, a transfer is applied through a single batched WriteKeys operation that the storage executes atomically.

Whichever option we choose, the working method stays the same: change the code, then let the test suite, from unit tests up to the stress test, confirm that the system still behaves according to its specification. That is what gives us confidence that the next change will not quietly break something.

The full code of the project is available on GitHub . Clone it, run the tests, break it and fix it; that is the best way to get a feel for the approach described here.

Dear readers! How do you approach testing your server-side projects?

Source: https://habr.com/ru/post/413681/

