{"id":602,"date":"2022-05-03T13:34:00","date_gmt":"2022-05-03T13:34:00","guid":{"rendered":"https:\/\/andrejacobs.org\/?p=602"},"modified":"2022-05-27T15:56:01","modified_gmt":"2022-05-27T15:56:01","slug":"learning-go-day-10","status":"publish","type":"post","link":"https:\/\/andrejacobs.org\/study-notes\/learning-go-day-10\/","title":{"rendered":"Learning Go – Day 10"},"content":{"rendered":"\n
Featured image by Egon Elbre

### Custom Errors

- Use the `errors` package to create them.

import "errors"\n\nvar (\n\tErrNotANumber = errors.New("Classic NaN issue")\n\tErrFailedToOpen = errors.New("Door is shut")\n)\n\/\/ Convention is to prefix with Err (or err if private)\n<\/code><\/pre>\n
- Use `err` for the variable name receiving a potentially returned error.
- Use `fmt.Fprintln` to output an error to `STDERR`.

```go
fmt.Fprintln(os.Stderr, err)
```

- Use `fmt.Errorf` to create new formatted errors, which may also wrap an existing error (to be "unwrapped" later) with the `%w` specifier.

```go
func something() error {
	return fmt.Errorf("Encountered error %w because of %d", ErrFailedToOpen, 42)
}

err := something()
if err != nil {
	fmt.Fprintln(os.Stderr, err)
}
```

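
Because the error was wrapped with `%w`, a caller can check for the underlying sentinel error with `errors.Is`. A minimal sketch, reusing the `something` and `ErrFailedToOpen` examples above:

```go
err := something()
if errors.Is(err, ErrFailedToOpen) {
	// ErrFailedToOpen is found anywhere in the wrapped error chain
	fmt.Fprintln(os.Stderr, "the door could not be opened:", err)
}
```
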
### CSV file support

- Use the `encoding/csv` package and `csv.NewReader` to read CSV files.

import "encoding\/csv"\n\nf, err := os.Open(fname)\n\/\/ handle err and defer closing of file\n\ncsvReader := csv.NewReader(f) \/\/ takes an io.Reader\n\/\/ Read all records into memory as [][]string\ndata, err := csvReader.ReadAll()\n\/\/ there is also Read to read a single row at a time\n<\/code><\/pre>\n
- Use `csv.NewWriter` to write CSV data to any `io.Writer`.

```go
writer := csv.NewWriter(os.Stdout)
writer.WriteAll(data) // data = [][]string

// or
writer.Write(row) // row = []string
```

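
Note that the writer buffers its output, so when writing row by row with `Write` you should call `Flush` (and check `Error`) at the end. A sketch that writes to a hypothetical `out.csv` file:

```go
f, err := os.Create("out.csv") // hypothetical output file
if err != nil {
	log.Fatal(err)
}
defer f.Close()

writer := csv.NewWriter(f)
for _, row := range data { // data = [][]string
	if err := writer.Write(row); err != nil {
		log.Fatal(err)
	}
}
writer.Flush() // write any buffered rows to the file
if err := writer.Error(); err != nil {
	log.Fatal(err) // reports errors from Write or Flush
}
```
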
### Benchmarking

- Use the shell's `time` command to time how long an operation took.
  NOTE: However, the output format of the Zsh built-in differs from that of Bash. This can be changed by setting the `TIMEFMT` environment variable. See this SO post.

```
$ TIMEFMT=$'real\t%E\nuser\t%U\nsys\t%S'
$ time sleep 2
real 2.01s
user 0.00s
sys 0.00s
```

- Benchmarks are written in `_test.go` files; the function name must start with the `Benchmark` prefix and it takes a `*testing.B` parameter.

```go
func BenchmarkSomething(b *testing.B) {
	// prepare some inputs etc.

	b.ResetTimer() // This ensures the setup time above is not part of the timings

	for i := 0; i < b.N; i++ {
		// code to be benchmarked
	}
}
```

- Run benchmarks with `go test` and the `-bench regexp` argument. It is good to ensure no unit tests are run by using `-run ^$` (so that no unit tests match the regexp).

```
$ go test -bench . -run ^$
```

- By default each benchmark is run for at least one second; use the `-benchtime=` argument to adjust this (e.g. `20x` runs exactly 20 iterations).

```
$ go test -bench . -benchtime=20x -run ^$ | tee results.txt
```

### Profiling

- Use the `-cpuprofile` argument to capture a CPU profile while benchmarking, then analyse it with `go tool pprof`.

```
$ go test -bench . -benchtime=10x -run ^$ -cpuprofile cpu00.pprof

$ go tool pprof cpu00.pprof
Type: cpu
Time: May 3, 2022 at 7:56pm (BST)
Duration: 3.63s, Total samples = 3.67s (101.09%)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof)

# enter 'top' to see where most of the time is spent
(pprof) top
Showing nodes accounting for 3.59s, 97.82% of 3.67s total
Dropped 29 nodes (cum <= 0.02s)
Showing top 10 nodes out of 82
      flat  flat%   sum%        cum   cum%
     2.99s 81.47% 81.47%      2.99s 81.47%  syscall.syscall
     0.21s  5.72% 87.19%      0.21s  5.72%  runtime.madvise
     0.12s  3.27% 90.46%      0.12s  3.27%  runtime.pthread_cond_wait
...

# Use 'top -cum' to see the cumulative time being spent
(pprof) top -cum

# Use 'list someFunction' to see a source code listing and the time spent on each line
(pprof) list myPackage.thisTakesTooLong
```

- Use the `-memprofile` argument to capture a memory profile.

```
$ go test -bench . -benchtime=10x -run ^$ -memprofile mem00.pprof

$ go tool pprof -alloc_space mem00.pprof
(pprof) top -cum
...
2.02GB 40.36% 75.11% 3.21GB 64.18% encoding/csv.(*Reader).ReadAll
```

- Add the `-benchmem` argument to also report memory allocation statistics per operation.

```
$ go test -bench . -benchtime=10x -run ^$ -benchmem | tee benchresults00m.txt
...
BenchmarkSomething-8 10 311644288 ns/op 495584000 B/op 5041044 allocs/op
```

- To see a visual graph of a profile, enter `web` at the pprof prompt. However, you will need to have graphviz installed (`brew install graphviz`).
- Use `benchstat` to compare the results of two benchmark runs.

```
# To install it
$ go install golang.org/x/perf/cmd/benchstat@latest

# To compare two benchmarks
$ ~/go/bin/benchstat results00.txt results01.txt
```

### Tracing

- Use the `-trace` argument to capture an execution trace, then inspect it in the browser with `go tool trace`.

```
$ go test -bench . -benchtime=10x -run ^$ -trace trace01.out

$ go tool trace trace01.out

2022/05/03 20:35:40 Parsing trace...
2022/05/03 20:35:41 Splitting trace...
2022/05/03 20:35:41 Opening browser. Trace viewer is listening on http://127.0.0.1:51549
```

### Miscellaneous

- See `strconv` for functions that help convert between strings and other data types, e.g. `strconv.ParseFloat()`.

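
A minimal sketch of `strconv.ParseFloat` (the string value is just an assumed example):

```go
s := "3.14159"
value, err := strconv.ParseFloat(s, 64) // parse s as a float64
if err != nil {
	log.Fatal(err)
}
fmt.Println(value + 1.0) // 4.14159
```
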
- `io.Discard` is an `io.Writer` that can be used when you don't care about the write operations. For example, instead of passing `os.Stdout` to a function you can let it discard the output while unit-testing.

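
A sketch of what that looks like; the `report` function here is a hypothetical example of a function that writes to an `io.Writer`:

```go
// report writes its output to the supplied writer (hypothetical example).
func report(w io.Writer, name string) {
	fmt.Fprintf(w, "processed %s\n", name)
}

// In production code:
report(os.Stdout, "orders.csv")

// In a unit test where the output does not matter:
report(io.Discard, "orders.csv")
```
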
- `iotest.TimeoutReader` (from the `testing/iotest` package) can be used to simulate a timeout error while unit-testing.

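
A minimal sketch: `TimeoutReader` wraps an existing reader (here just a `strings.NewReader` with assumed example data) and returns `iotest.ErrTimeout` on the second read, which lets a test exercise timeout handling:

```go
// needs the "strings" and "testing/iotest" packages
r := iotest.TimeoutReader(strings.NewReader("some test data"))

buf := make([]byte, 4)
_, _ = r.Read(buf)    // the first read succeeds
_, err := r.Read(buf) // the second read returns iotest.ErrTimeout
if err == iotest.ErrTimeout {
	// exercise the timeout handling of the code under test
}
```
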
- Closing a channel can be used as a signal (e.g. to tell goroutines that work is done):

```go
isDone := make(chan struct{})

// in some goroutine
close(isDone)

// listen for the signal
for {
	select {
	case <-isDone:
		// received the signal
		return
	}
}
```

- Get the number of logical CPUs with `runtime.NumCPU()`.

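
A small sketch of where that is useful, e.g. sizing a worker pool (the job processing itself is hypothetical):

```go
numWorkers := runtime.NumCPU() // one worker per logical CPU
jobs := make(chan int)

for i := 0; i < numWorkers; i++ {
	go func() {
		for job := range jobs {
			_ = job // process the job (hypothetical work)
		}
	}()
}
```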