Mirror of https://github.com/zeromicro/go-zero.git, synced 2026-05-12 01:10:00 +08:00

Compare commits: 142 commits, `tools/goct...` → `master`
.github/copilot-instructions.md (vendored, new file, 344 lines)
@@ -0,0 +1,344 @@
# GitHub Copilot Instructions for go-zero

This document provides guidelines for GitHub Copilot when assisting with development in the go-zero project.

## Project Overview

go-zero is a web and RPC framework with many built-in engineering practices, designed to keep busy services stable through resilient design. It has been serving sites with tens of millions of users for years.

### Key Architecture Components

- **REST API framework** (`rest/`) - HTTP service framework with middleware chain support
- **RPC framework** (`zrpc/`) - gRPC-based RPC framework with etcd service discovery and p2c_ewma load balancing
- **Gateway** (`gateway/`) - API gateway supporting both HTTP and gRPC upstreams with proto-based routing
- **MCP Server** (`mcp/`) - Model Context Protocol server for AI agent integration via SSE
- **Core utilities** (`core/`) - Production-grade components:
  - Resilience: circuit breakers (`breaker/`), rate limiters (`limit/`), adaptive load shedding (`load/`)
  - Storage: SQL with cache (`stores/sqlc/`), Redis (`stores/redis/`), MongoDB (`stores/mongo/`)
  - Concurrency: MapReduce (`mr/`), worker pools (`executors/`), sync primitives (`syncx/`)
  - Observability: metrics (`metric/`), tracing (`trace/`), structured logging (`logx/`)
- **Code generation tool** (`tools/goctl/`) - CLI tool for generating Go code from `.api` and `.proto` files

## Coding Standards and Conventions

### Code Style

1. **Follow Go conventions**: Use `gofmt` for formatting, follow effective Go practices
2. **Package naming**: Use lowercase, single-word package names when possible
3. **Error handling**: Always handle errors explicitly, use `errorx.BatchError` for multiple errors
4. **Context propagation**: Always pass `context.Context` as the first parameter for functions that may block
5. **Configuration structures**: Use struct tags with JSON annotations, defaults, and validation

**Pattern**: All service configs embed `service.ServiceConf` for common fields (Name, Log, Mode, Telemetry)

```go
type Config struct {
	service.ServiceConf // Always embed for services
	Host     string `json:",default=0.0.0.0"`
	Port     int    // Required field (no default)
	Timeout  int64  `json:",default=3000"` // Timeouts in milliseconds
	Optional string `json:",optional"` // Optional field
	Mode     string `json:",default=pro,options=dev|test|rt|pre|pro"` // Validated options
}
```

**Service modes**: `dev`/`test`/`rt` disable load shedding and stats; `pre`/`pro` enable all resilience features

### Interface Design

1. **Small interfaces**: Follow Go's preference for small, focused interfaces
2. **Context methods**: Provide both context and non-context versions of methods
3. **Options pattern**: Use functional options for complex configuration

Example:

```go
func (c *Client) Get(key string, val any) error {
	return c.GetCtx(context.Background(), key, val)
}

func (c *Client) GetCtx(ctx context.Context, key string, val any) error {
	// implementation
}
```
### Testing Patterns

1. **Test file naming**: Use `*_test.go` suffix
2. **Test function naming**: Use `TestFunctionName` pattern
3. **Use testify/assert**: Prefer `assert` package for assertions
4. **Table-driven tests**: Use table-driven tests for multiple scenarios
5. **Mock interfaces**: Use `go.uber.org/mock` for mocking
6. **Test helpers**: Use `redistest`, `mongtest` helpers for database testing

Example test pattern:

```go
func TestSomething(t *testing.T) {
	tests := []struct {
		name     string
		input    string
		expected string
		wantErr  bool
	}{
		{"valid case", "input", "output", false},
		{"error case", "bad", "", true},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			result, err := SomeFunction(tt.input)
			if tt.wantErr {
				assert.Error(t, err)
				return
			}
			assert.NoError(t, err)
			assert.Equal(t, tt.expected, result)
		})
	}
}
```

## Framework-Specific Guidelines

### REST API Development

1. **API Definition**: Use `.api` files to define REST APIs with goctl codegen
2. **Handler pattern**: Separate business logic into logic packages (handlers call the logic layer)
3. **Middleware chain**: Middlewares wrap via the `chain.Chain` interface - use `Append()` or `Prepend()` to control order
   - Built-in middlewares (all in `rest/handler/`): tracing, logging, metrics, recovery, breaker, shedding, timeout, maxconns, maxbytes, gunzip
   - Custom middleware: `func(http.Handler) http.Handler` - call `next.ServeHTTP(w, r)` to continue the chain
4. **Response handling**: Use `httpx.WriteJson(w, code, v)` for JSON responses
5. **Error handling**: Use `httpx.Error(w, err)` or `httpx.ErrorCtx(ctx, w, err)` for HTTP error responses
6. **Route registration**: Routes defined with `Method`, `Path`, and `Handler` - wildcards use `:param` syntax
### RPC Development

1. **Protocol Buffers**: Use protobuf for service definitions, generate code with goctl
2. **Service discovery**: Use etcd for dynamic service registration/discovery, or direct endpoints for static routing
3. **Load balancing**: Default is `p2c_ewma` (power of 2 choices with EWMA), configurable via `BalancerName`
4. **Client configuration**: Support `Etcd`, `Endpoints`, or `Target` - use `BuildTarget()` to construct the connection string
5. **Interceptors**: Implement gRPC interceptors for cross-cutting concerns (auth, logging, metrics)
6. **Health checks**: gRPC health checks enabled by default (`Health: true`)

### Database Operations

1. **SQL operations**: Use the `sqlx.SqlConn` interface - methods always end with `Ctx` for context support
2. **Caching pattern**: `stores/sqlc` provides `CachedConn` for the automatic cache-aside pattern
   - `QueryRowCtx`: Query with a cache key, auto-populate on cache miss
   - `ExecCtx`: Execute and delete cache keys
3. **Transactions**: Use `sqlx.SqlConn.TransactCtx()` to get a transaction session
4. **Connection pooling**: Managed automatically (64 max idle/open, 1min lifetime)
5. **Test helpers**: Use `redistest.CreateRedis(t)` for Redis, SQL mocks for DB testing

Example cache pattern:

```go
err := c.QueryRowCtx(ctx, &dest, key, func(ctx context.Context, conn sqlx.SqlConn) error {
	return conn.QueryRowCtx(ctx, &dest, query, args...)
})
```

### Configuration Management

1. **YAML configuration**: Use YAML for configuration files
2. **Environment variables**: Support environment variable overrides
3. **Validation**: Include proper validation for configuration parameters
4. **Sensible defaults**: Provide reasonable default values

## Error Handling Best Practices

1. **Wrap errors**: Use `fmt.Errorf` with the `%w` verb to wrap errors
2. **Custom errors**: Define custom error types when needed
3. **Error logging**: Log errors appropriately with context
4. **Graceful degradation**: Implement fallback mechanisms
## Performance Considerations

1. **Resource pools**: Use connection pools and worker pools
2. **Circuit breakers**: Implement circuit breaker patterns for external calls
3. **Rate limiting**: Apply rate limiting to protect services
4. **Load shedding**: Implement adaptive load shedding
5. **Metrics**: Add appropriate metrics and monitoring
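A minimal worker-pool sketch for item 1, using only the standard library (go-zero's own pools in `core/executors` and `core/mr` are more featureful; this just shows the fan-out/fan-in shape):

```go
package main

import (
	"fmt"
	"sync"
)

// runPool fans jobs out to n workers and collects the results.
func runPool(n int, jobs []int, work func(int) int) []int {
	in := make(chan int)
	out := make(chan int)
	var wg sync.WaitGroup

	// Start n workers that drain the input channel.
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range in {
				out <- work(j)
			}
		}()
	}

	// Feed the jobs, then close out once all workers finish.
	go func() {
		for _, j := range jobs {
			in <- j
		}
		close(in)
		wg.Wait()
		close(out)
	}()

	var results []int
	for r := range out {
		results = append(results, r)
	}
	return results
}

func main() {
	sum := 0
	for _, v := range runPool(4, []int{1, 2, 3, 4, 5}, func(x int) int { return x * x }) {
		sum += v
	}
	fmt.Println(sum) // → 55
}
```

Bounding the worker count caps concurrent load on downstream resources, which is the same goal connection pools serve.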
## Security Guidelines

1. **Input validation**: Validate all input parameters
2. **SQL injection prevention**: Use parameterized queries
3. **Authentication**: Implement proper JWT token handling
4. **HTTPS**: Support TLS/HTTPS configurations
5. **CORS**: Configure CORS appropriately for web APIs

## Documentation Standards

1. **Package documentation**: Include package-level documentation
2. **Function documentation**: Document exported functions with examples
3. **API documentation**: Maintain API documentation in sync
4. **README updates**: Update README for significant changes

## GitHub Issue Management

### Understanding and Categorizing Issues

When analyzing GitHub issues, consider these common categories:

1. **Bug Reports**: Stack traces, version info, reproduction steps
2. **Feature Requests**: Use case, proposed solution, alternatives
3. **Questions**: Usage, configuration, or architecture
4. **Documentation Issues**: Missing, unclear, or incorrect docs
5. **Performance Issues**: Benchmarks, profiling data, resource usage

### Issue Analysis Checklist

- Identify affected component (REST, RPC, Gateway, MCP, Core utilities, goctl)
- Check versions (go-zero, Go)
- Look for reproduction steps or code examples
- Review code snippets, logs, or stack traces
- Check if related to resilience features (breaker, load shedding, rate limiting)
- Determine production impact

### Responding to Issues

Be helpful and professional. Ask clarifying questions when needed. Reference relevant documentation and code files. Provide code examples following project conventions. Suggest workarounds when applicable.

### Chinese to English Translation

go-zero has an international user base. When encountering issues or comments written in Chinese, translate them to English to ensure all contributors can participate in discussions.

#### Translation Guidelines

1. **Update issue titles**: Edit the issue title to include the English translation only
2. **Translate comments in place**: Add a comment with the English translation, followed by the original Chinese text
3. **Keep original Chinese**: After translating, include the original Chinese text in a blockquote for verification
4. **Encourage English communication**: Politely suggest users write in English for better collaboration
5. **Maintain technical accuracy**: Preserve technical terms, component names, and code exactly
6. **Translate naturally**: Avoid literal word-by-word translation; use idiomatic English
7. **Preserve formatting**: Keep markdown formatting, code blocks, and links intact
8. **Keep URLs unchanged**: Don't translate URLs or file paths

#### Common Technical Terms (Chinese → English)

- 框架 → **Framework** | 中间件 → **Middleware** | 负载均衡 → **Load Balancing**
- 熔断器 → **Circuit Breaker** | 限流 → **Rate Limiting** | 降载/过载保护 → **Load Shedding**
- 服务发现 → **Service Discovery** | 配置 → **Configuration** | 弹性/容错 → **Resilience** | 微服务 → **Microservices**

#### Translation Example

**Original Chinese Title:** `goctl 执行环境问题`
**Updated Title:** `goctl Execution Environment Issue`

**Original Chinese Comment:** `我在项目中遇到熔断器配置问题`
**Translation in Comment:**

```markdown
I encountered a circuit breaker configuration issue in my project.

> Original (原文): 我在项目中遇到熔断器配置问题
```

### Common Issue Patterns and Solutions

#### Configuration Issues

- Check `service.ServiceConf` embedding and struct tags
- Verify YAML syntax, defaults, and validation rules
- Reference: [rest/config.go](rest/config.go), [zrpc/config.go](zrpc/config.go)

#### Code Generation (goctl) Issues

- Verify `.api` or `.proto` file syntax and the goctl version
- Reference: `tools/goctl/` directory

#### RPC Connection Issues

- Check etcd configuration, service discovery, and endpoints
- Verify load balancing settings (p2c_ewma)

#### Database/Cache Issues

- Verify `sqlx.SqlConn` usage with context
- Check cache key generation, invalidation, and connection pools
- Use test helpers (`redistest`, `mongtest`)

#### Performance Issues

- Check if load shedding is enabled (mode: `pre`/`pro`)
- Review circuit breaker thresholds, rate limiting, and context timeouts

### Referencing Codebase

When explaining issues, reference specific files and patterns:

- REST API: `rest/`, `rest/handler/`, `rest/httpx/`
- RPC: `zrpc/`, `zrpc/internal/`
- Core utilities: `core/breaker/`, `core/limit/`, `core/load/`, etc.
- Gateway: `gateway/`
- MCP: `mcp/`
- Code generation: `tools/goctl/`
- Examples: the `adhoc/` directory contains various examples

### Encouraging Best Practices

When responding to issues, gently guide users toward:

- Proper error handling with context
- Using resilience features (breakers, rate limiters)
- Following testing patterns with table-driven tests
- Implementing proper resource cleanup
- Reading existing documentation in `docs/` and `readme.md`

## Common Patterns to Follow

### Service Configuration

```go
type ServiceConf struct {
	Name string
	Log  logx.LogConf
	Mode string `json:",default=pro,options=[dev,test,pre,pro]"`
	// ... other common fields
}
```

### Middleware Implementation

```go
func SomeMiddleware() rest.Middleware {
	return func(next http.HandlerFunc) http.HandlerFunc {
		return func(w http.ResponseWriter, r *http.Request) {
			// Pre-processing
			next.ServeHTTP(w, r)
			// Post-processing
		}
	}
}
```

### Resource Management

Always implement proper resource cleanup using defer and context cancellation.
## Build and Test Commands

- Build: `go build ./...`
- Test: `go test ./...`
- Test with race detection: `go test -race ./...`
- Format: `gofmt -w .`
- Code generation:
  - REST API: `goctl api go -api *.api -dir .`
  - RPC: `goctl rpc protoc *.proto --go_out=. --go-grpc_out=. --zrpc_out=.`
  - Model from SQL: `goctl model mysql datasource -url="user:pass@tcp(host:port)/db" -table="*" -dir="./model"`

## Critical Architecture Patterns

### Resilience Design Philosophy

go-zero implements defense-in-depth with multiple protection layers:

1. **Circuit Breaker** (`core/breaker`): Google SRE breaker - tracks success/failure, opens on error threshold
2. **Adaptive Load Shedding** (`core/load`): CPU-based auto-rejection when the system is overloaded (disabled in dev/test/rt modes)
3. **Rate Limiting** (`core/limit`): Token bucket (Redis-based) and period limiters
4. **Timeout Control**: Cascading timeouts via context - set at multiple levels (client, server, handler)
### Middleware Chain Architecture

`rest/chain` provides middleware composition:

```go
// Middleware signature
type Middleware func(http.Handler) http.Handler

// Chain operations (Append/Prepend return a new Chain)
chain := chain.New(m1, m2)
chain = chain.Append(m3)  // Adds to end: m1 -> m2 -> m3
chain = chain.Prepend(m0) // Adds to start: m0 -> m1 -> m2 -> m3
handler := chain.Then(finalHandler)
```

### Concurrency Patterns

- **MapReduce** (`core/mr`): Parallel processing with worker pools - use for batch operations
- **Executors** (`core/executors`): Bulk/period executors for batching operations
- **SingleFlight** (`core/syncx`): Deduplicates concurrent identical requests
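The single-flight idea can be sketched in plain Go; this is an illustrative reimplementation of the concept, not the `core/syncx` API:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

// call tracks one in-flight computation that late arrivals wait on.
type call struct {
	wg  sync.WaitGroup
	val int
}

// Group deduplicates concurrent calls that share a key.
type Group struct {
	mu    sync.Mutex
	calls map[string]*call
}

func (g *Group) Do(key string, fn func() int) int {
	g.mu.Lock()
	if g.calls == nil {
		g.calls = make(map[string]*call)
	}
	if c, ok := g.calls[key]; ok {
		g.mu.Unlock()
		c.wg.Wait() // someone else is already computing this key
		return c.val
	}
	c := new(call)
	c.wg.Add(1)
	g.calls[key] = c
	g.mu.Unlock()

	c.val = fn() // only the first caller pays the cost
	c.wg.Done()

	g.mu.Lock()
	delete(g.calls, key)
	g.mu.Unlock()
	return c.val
}

func main() {
	var g Group
	var loads int32
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			v := g.Do("config", func() int {
				atomic.AddInt32(&loads, 1)
				time.Sleep(20 * time.Millisecond) // simulate a slow load
				return 42
			})
			if v != 42 {
				panic("wrong value")
			}
		}()
	}
	wg.Wait()
	fmt.Printf("value loaded %d time(s) for 8 callers\n", loads)
}
```

This is what makes cache-miss stampedes cheap: overlapping requests for the same key collapse into one backend call.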
Remember to run tests and ensure all checks pass before submitting changes. The project emphasizes high quality, performance, and reliability, so these should be primary considerations in all development work.
.github/workflows/codeql-analysis.yml (vendored, 8 changed lines)

@@ -35,11 +35,11 @@ jobs:
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6

       # Initializes the CodeQL tools for scanning.
       - name: Initialize CodeQL
-        uses: github/codeql-action/init@v3
+        uses: github/codeql-action/init@v4
         with:
           languages: ${{ matrix.language }}
           # If you wish to specify custom queries, you can do so here or in a config file.
@@ -50,7 +50,7 @@ jobs:
       # Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
       # If this step fails, then you should remove it and run the build manually (see below)
       - name: Autobuild
-        uses: github/codeql-action/autobuild@v3
+        uses: github/codeql-action/autobuild@v4

       # ℹ️ Command-line programs to run using the OS shell.
       # 📚 https://git.io/JvXDl
@@ -64,4 +64,4 @@ jobs:
       # make release

       - name: Perform CodeQL Analysis
-        uses: github/codeql-action/analyze@v3
+        uses: github/codeql-action/analyze@v4
.github/workflows/go.yml (vendored, 10 changed lines)

@@ -12,10 +12,10 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Check out code into the Go module directory
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6

       - name: Set up Go 1.x
-        uses: actions/setup-go@v5
+        uses: actions/setup-go@v6
         with:
           go-version-file: go.mod
           check-latest: true
@@ -40,7 +40,7 @@ jobs:
         run: go test -race -coverprofile=coverage.txt -covermode=atomic ./...

       - name: Codecov
-        uses: codecov/codecov-action@v5
+        uses: codecov/codecov-action@v6
         with:
           files: ./coverage.txt
           flags: unittests
@@ -52,10 +52,10 @@ jobs:
     runs-on: windows-latest
     steps:
       - name: Checkout codebase
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6

       - name: Set up Go 1.x
-        uses: actions/setup-go@v5
+        uses: actions/setup-go@v6
         with:
           # make sure Go version compatible with go-zero
           go-version-file: go.mod
.github/workflows/issues.yml (vendored, 2 changed lines)

@@ -7,7 +7,7 @@ jobs:
   close-issues:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/stale@v9
+      - uses: actions/stale@v10
        with:
          days-before-issue-stale: 365
          days-before-issue-close: 90
.github/workflows/release.yaml (vendored, 2 changed lines)

@@ -16,7 +16,7 @@ jobs:
           - goarch: "386"
             goos: darwin
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v6
       - uses: zeromicro/go-zero-release-action@master
         with:
           github_token: ${{ secrets.GITHUB_TOKEN }}
.github/workflows/reviewdog.yml (vendored, 7 changed lines)

@@ -5,7 +5,12 @@ jobs:
     name: runner / staticcheck
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v6
+      - uses: actions/setup-go@v6
+        with:
+          go-version-file: go.mod
+          check-latest: true
+          cache: true
       - uses: reviewdog/action-staticcheck@v1
         with:
           github_token: ${{ secrets.github_token }}
.github/workflows/version-check.yml (vendored, 4 changed lines)

@@ -10,10 +10,10 @@ jobs:
   version-check:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v6

      - name: Set up Go
-        uses: actions/setup-go@v5
+        uses: actions/setup-go@v6
         with:
           go-version: '1.21'
.gitignore (vendored, 1 changed line)

@@ -17,6 +17,7 @@
 **/logs
 **/adhoc
 **/coverage.txt
+**/WARP.md

 # for test purpose
 go.work
@@ -40,7 +40,7 @@ type (
 	}
 )

-// New create a Filter, store is the backed redis, key is the key for the bloom filter,
+// New creates a Filter, store is the backed redis, key is the key for the bloom filter,
 // bits is how many bits will be used, maps is how many hashes for each addition.
 // best practices:
 // elements - means how many actual elements

@@ -6,8 +6,6 @@ import (
 	"crypto/cipher"
 	"encoding/base64"
 	"errors"
-
-	"github.com/zeromicro/go-zero/core/logx"
 )

 // ErrPaddingSize indicates bad padding size.
@@ -27,7 +25,8 @@ func newECB(b cipher.Block) *ecb {

 type ecbEncrypter ecb

-// NewECBEncrypter returns an ECB encrypter.
+// Deprecated: NewECBEncrypter returns an ECB encrypter.
+// ECB mode is insecure for multi-block data. Use AES-GCM instead.
 func NewECBEncrypter(b cipher.Block) cipher.BlockMode {
 	return (*ecbEncrypter)(newECB(b))
 }
@@ -39,12 +38,10 @@ func (x *ecbEncrypter) BlockSize() int { return x.blockSize }
 // the block size. Dst and src must overlap entirely or not at all.
 func (x *ecbEncrypter) CryptBlocks(dst, src []byte) {
 	if len(src)%x.blockSize != 0 {
-		logx.Error("crypto/cipher: input not full blocks")
-		return
+		panic("crypto/cipher: input not full blocks")
 	}
 	if len(dst) < len(src) {
-		logx.Error("crypto/cipher: output smaller than input")
-		return
+		panic("crypto/cipher: output smaller than input")
 	}

 	for len(src) > 0 {
@@ -56,7 +53,8 @@ func (x *ecbEncrypter) CryptBlocks(dst, src []byte) {

 type ecbDecrypter ecb

-// NewECBDecrypter returns an ECB decrypter.
+// Deprecated: NewECBDecrypter returns an ECB decrypter.
+// ECB mode is insecure for multi-block data. Use AES-GCM instead.
 func NewECBDecrypter(b cipher.Block) cipher.BlockMode {
 	return (*ecbDecrypter)(newECB(b))
 }
@@ -70,12 +68,10 @@ func (x *ecbDecrypter) BlockSize() int {
 // the block size. Dst and src must overlap entirely or not at all.
 func (x *ecbDecrypter) CryptBlocks(dst, src []byte) {
 	if len(src)%x.blockSize != 0 {
-		logx.Error("crypto/cipher: input not full blocks")
-		return
+		panic("crypto/cipher: input not full blocks")
 	}
 	if len(dst) < len(src) {
-		logx.Error("crypto/cipher: output smaller than input")
-		return
+		panic("crypto/cipher: output smaller than input")
 	}

 	for len(src) > 0 {
@@ -85,14 +81,18 @@ func (x *ecbDecrypter) CryptBlocks(dst, src []byte) {
 	}
 }

-// EcbDecrypt decrypts src with the given key.
+// Deprecated: EcbDecrypt decrypts src with the given key.
+// ECB mode is insecure for multi-block data. Use AES-GCM instead.
 func EcbDecrypt(key, src []byte) ([]byte, error) {
 	block, err := aes.NewCipher(key)
 	if err != nil {
-		logx.Errorf("Decrypt key error: % x", key)
 		return nil, err
 	}

+	if len(src)%block.BlockSize() != 0 {
+		return nil, ErrPaddingSize
+	}
+
 	decrypter := NewECBDecrypter(block)
 	decrypted := make([]byte, len(src))
 	decrypter.CryptBlocks(decrypted, src)
@@ -100,8 +100,9 @@ func EcbDecrypt(key, src []byte) ([]byte, error) {
 	return pkcs5Unpadding(decrypted, decrypter.BlockSize())
 }

-// EcbDecryptBase64 decrypts base64 encoded src with the given base64 encoded key.
+// Deprecated: EcbDecryptBase64 decrypts base64 encoded src with the given base64 encoded key.
 // The returned string is also base64 encoded.
+// ECB mode is insecure for multi-block data. Use AES-GCM instead.
 func EcbDecryptBase64(key, src string) (string, error) {
 	keyBytes, err := getKeyBytes(key)
 	if err != nil {
@@ -121,11 +122,11 @@ func EcbDecryptBase64(key, src string) (string, error) {
 	return base64.StdEncoding.EncodeToString(decryptedBytes), nil
 }

-// EcbEncrypt encrypts src with the given key.
+// Deprecated: EcbEncrypt encrypts src with the given key.
+// ECB mode is insecure for multi-block data. Use AES-GCM instead.
 func EcbEncrypt(key, src []byte) ([]byte, error) {
 	block, err := aes.NewCipher(key)
 	if err != nil {
-		logx.Errorf("Encrypt key error: % x", key)
 		return nil, err
 	}

@@ -137,8 +138,9 @@ func EcbEncrypt(key, src []byte) ([]byte, error) {
 	return crypted, nil
 }

-// EcbEncryptBase64 encrypts base64 encoded src with the given base64 encoded key.
+// Deprecated: EcbEncryptBase64 encrypts base64 encoded src with the given base64 encoded key.
 // The returned string is also base64 encoded.
+// ECB mode is insecure for multi-block data. Use AES-GCM instead.
 func EcbEncryptBase64(key, src string) (string, error) {
|
func EcbEncryptBase64(key, src string) (string, error) {
|
||||||
keyBytes, err := getKeyBytes(key)
|
keyBytes, err := getKeyBytes(key)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
@@ -179,10 +181,20 @@ func pkcs5Padding(ciphertext []byte, blockSize int) []byte {
|
|||||||
|
|
||||||
func pkcs5Unpadding(src []byte, blockSize int) ([]byte, error) {
|
func pkcs5Unpadding(src []byte, blockSize int) ([]byte, error) {
|
||||||
length := len(src)
|
length := len(src)
|
||||||
unpadding := int(src[length-1])
|
if length == 0 {
|
||||||
if unpadding >= length || unpadding > blockSize {
|
|
||||||
return nil, ErrPaddingSize
|
return nil, ErrPaddingSize
|
||||||
}
|
}
|
||||||
|
|
||||||
|
unpadding := int(src[length-1])
|
||||||
|
if unpadding < 1 || unpadding > blockSize || unpadding > length {
|
||||||
|
return nil, ErrPaddingSize
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, b := range src[length-unpadding:] {
|
||||||
|
if int(b) != unpadding {
|
||||||
|
return nil, ErrPaddingSize
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
return src[:length-unpadding], nil
|
return src[:length-unpadding], nil
|
||||||
}
|
}
|
||||||
|
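The hardened `pkcs5Unpadding` above adds three checks: empty input, an out-of-range padding byte, and padding bytes that do not all equal the padding length. A self-contained sketch mirroring that logic (local `errPaddingSize` stands in for go-zero's `ErrPaddingSize`):

```go
package main

import (
	"errors"
	"fmt"
)

var errPaddingSize = errors.New("padding size error")

// pkcs5Unpadding validates every padding byte before stripping,
// mirroring the patched version above.
func pkcs5Unpadding(src []byte, blockSize int) ([]byte, error) {
	length := len(src)
	if length == 0 {
		return nil, errPaddingSize
	}

	unpadding := int(src[length-1])
	if unpadding < 1 || unpadding > blockSize || unpadding > length {
		return nil, errPaddingSize
	}

	// every padding byte must equal the padding length
	for _, b := range src[length-unpadding:] {
		if int(b) != unpadding {
			return nil, errPaddingSize
		}
	}

	return src[:length-unpadding], nil
}

func main() {
	out, err := pkcs5Unpadding([]byte{'h', 'i', 3, 3, 3}, 8)
	fmt.Println(string(out), err)

	_, err = pkcs5Unpadding([]byte{'h', 'i', 2, 3, 3}, 8) // corrupt padding byte
	fmt.Println(err)
}
```

The full-byte scan is what rejects the `0x02, 0x03, 0x03` malformed-padding case exercised in the tests below.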
|||||||
@@ -28,8 +28,8 @@ func TestAesEcb(t *testing.T) {
|
|||||||
_, err = EcbDecrypt(badKey2, dst)
|
_, err = EcbDecrypt(badKey2, dst)
|
||||||
assert.NotNil(t, err)
|
assert.NotNil(t, err)
|
||||||
_, err = EcbDecrypt(key, val)
|
_, err = EcbDecrypt(key, val)
|
||||||
// not enough block, just nil
|
// not a multiple of block size
|
||||||
assert.Nil(t, err)
|
assert.NotNil(t, err)
|
||||||
src, err := EcbDecrypt(key, dst)
|
src, err := EcbDecrypt(key, dst)
|
||||||
assert.Nil(t, err)
|
assert.Nil(t, err)
|
||||||
assert.Equal(t, val, src)
|
assert.Equal(t, val, src)
|
||||||
@@ -41,33 +41,28 @@ func TestAesEcb(t *testing.T) {
|
|||||||
assert.Equal(t, 16, decrypter.BlockSize())
|
assert.Equal(t, 16, decrypter.BlockSize())
|
||||||
|
|
||||||
dst = make([]byte, 8)
|
dst = make([]byte, 8)
|
||||||
encrypter.CryptBlocks(dst, val)
|
assert.Panics(t, func() {
|
||||||
for _, b := range dst {
|
encrypter.CryptBlocks(dst, val)
|
||||||
assert.Equal(t, byte(0), b)
|
})
|
||||||
}
|
|
||||||
|
|
||||||
dst = make([]byte, 8)
|
dst = make([]byte, 8)
|
||||||
encrypter.CryptBlocks(dst, valLong)
|
assert.Panics(t, func() {
|
||||||
for _, b := range dst {
|
encrypter.CryptBlocks(dst, valLong)
|
||||||
assert.Equal(t, byte(0), b)
|
})
|
||||||
}
|
|
||||||
|
|
||||||
dst = make([]byte, 8)
|
dst = make([]byte, 8)
|
||||||
decrypter.CryptBlocks(dst, val)
|
assert.Panics(t, func() {
|
||||||
for _, b := range dst {
|
decrypter.CryptBlocks(dst, val)
|
||||||
assert.Equal(t, byte(0), b)
|
})
|
||||||
}
|
|
||||||
|
|
||||||
dst = make([]byte, 8)
|
dst = make([]byte, 8)
|
||||||
decrypter.CryptBlocks(dst, valLong)
|
assert.Panics(t, func() {
|
||||||
for _, b := range dst {
|
decrypter.CryptBlocks(dst, valLong)
|
||||||
assert.Equal(t, byte(0), b)
|
})
|
||||||
}
|
|
||||||
|
|
||||||
_, err = EcbEncryptBase64("cTR0N3dDKkYtSmFOZFJnVWpYbjJyNXU4eC9BP0QK", "aGVsbG93b3JsZGxvbmcuLgo=")
|
_, err = EcbEncryptBase64("cTR0N3dDKkYtSmFOZFJnVWpYbjJyNXU4eC9BP0QK", "aGVsbG93b3JsZGxvbmcuLgo=")
|
||||||
assert.Error(t, err)
|
assert.Error(t, err)
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestAesEcbBase64(t *testing.T) {
|
func TestAesEcbBase64(t *testing.T) {
|
||||||
const (
|
const (
|
||||||
val = "hello"
|
val = "hello"
|
||||||
@@ -98,3 +93,44 @@ func TestAesEcbBase64(t *testing.T) {
|
|||||||
assert.Nil(t, err)
|
assert.Nil(t, err)
|
||||||
assert.Equal(t, val, string(b))
|
assert.Equal(t, val, string(b))
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func TestPkcs5UnpaddingEmptyInput(t *testing.T) {
|
||||||
|
_, err := pkcs5Unpadding([]byte{}, 16)
|
||||||
|
assert.Equal(t, ErrPaddingSize, err)
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestPkcs5UnpaddingMalformedPadding(t *testing.T) {
|
||||||
|
// Valid PKCS5 padding of 3: last 3 bytes should all be 0x03
|
||||||
|
// Here we corrupt one padding byte
|
||||||
|
malformed := []byte{0x41, 0x41, 0x41, 0x41, 0x41, 0x41, 0x41, 0x41,
|
||||||
|
0x41, 0x41, 0x41, 0x41, 0x41, 0x02, 0x03, 0x03}
|
||||||
|
_, err := pkcs5Unpadding(malformed, 16)
|
||||||
|
assert.Equal(t, ErrPaddingSize, err)
|
||||||
|
|
||||||
|
// All padding bytes correct
|
||||||
|
valid := []byte{0x41, 0x41, 0x41, 0x41, 0x41, 0x41, 0x41, 0x41,
|
||||||
|
0x41, 0x41, 0x41, 0x41, 0x41, 0x03, 0x03, 0x03}
|
||||||
|
result, err := pkcs5Unpadding(valid, 16)
|
||||||
|
assert.NoError(t, err)
|
||||||
|
assert.Equal(t, valid[:13], result)
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestPkcs5UnpaddingInvalidPaddingValue(t *testing.T) {
|
||||||
|
// padding value = 0 (< 1)
|
||||||
|
_, err := pkcs5Unpadding([]byte{0x41, 0x00}, 16)
|
||||||
|
assert.Equal(t, ErrPaddingSize, err)
|
||||||
|
|
||||||
|
// padding value > blockSize
|
||||||
|
_, err = pkcs5Unpadding([]byte{0x41, 0x41, 0x41, 0x41, 17}, 4)
|
||||||
|
assert.Equal(t, ErrPaddingSize, err)
|
||||||
|
|
||||||
|
// padding value > length
|
||||||
|
_, err = pkcs5Unpadding([]byte{0x41, 0x03}, 16)
|
||||||
|
assert.Equal(t, ErrPaddingSize, err)
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestEcbDecryptEmptyInput(t *testing.T) {
|
||||||
|
key := []byte("q4t7w!z%C*F-JaNdRgUjXn2r5u8x/A?D")
|
||||||
|
_, err := EcbDecrypt(key, []byte{})
|
||||||
|
assert.Equal(t, ErrPaddingSize, err)
|
||||||
|
}
|
||||||
|
|||||||
@@ -35,7 +35,7 @@ func ComputeKey(pubKey, priKey *big.Int) (*big.Int, error) {
|
|||||||
return nil, ErrInvalidPubKey
|
return nil, ErrInvalidPubKey
|
||||||
}
|
}
|
||||||
|
|
||||||
if pubKey.Sign() <= 0 && p.Cmp(pubKey) <= 0 {
|
if pubKey.Sign() <= 0 || p.Cmp(pubKey) <= 0 {
|
||||||
return nil, ErrPubKeyOutOfBound
|
return nil, ErrPubKeyOutOfBound
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|||||||
@@ -94,3 +94,32 @@ func TestDHOnErrors(t *testing.T) {
|
|||||||
|
|
||||||
assert.NotNil(t, NewPublicKey([]byte("")))
|
assert.NotNil(t, NewPublicKey([]byte("")))
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func TestDHPubKeyBoundary(t *testing.T) {
|
||||||
|
key, err := GenerateKey()
|
||||||
|
assert.Nil(t, err)
|
||||||
|
|
||||||
|
// pubKey = 0 should be rejected
|
||||||
|
_, err = ComputeKey(big.NewInt(0), key.PriKey)
|
||||||
|
assert.ErrorIs(t, err, ErrPubKeyOutOfBound)
|
||||||
|
|
||||||
|
// pubKey = -1 should be rejected
|
||||||
|
_, err = ComputeKey(big.NewInt(-1), key.PriKey)
|
||||||
|
assert.ErrorIs(t, err, ErrPubKeyOutOfBound)
|
||||||
|
|
||||||
|
// pubKey = p should be rejected
|
||||||
|
_, err = ComputeKey(new(big.Int).Set(p), key.PriKey)
|
||||||
|
assert.ErrorIs(t, err, ErrPubKeyOutOfBound)
|
||||||
|
|
||||||
|
// pubKey = p+1 should be rejected
|
||||||
|
_, err = ComputeKey(new(big.Int).Add(p, big.NewInt(1)), key.PriKey)
|
||||||
|
assert.ErrorIs(t, err, ErrPubKeyOutOfBound)
|
||||||
|
|
||||||
|
// pubKey = 1 should be accepted
|
||||||
|
_, err = ComputeKey(big.NewInt(1), key.PriKey)
|
||||||
|
assert.NoError(t, err)
|
||||||
|
|
||||||
|
// pubKey = p-1 should be accepted
|
||||||
|
_, err = ComputeKey(new(big.Int).Sub(p, big.NewInt(1)), key.PriKey)
|
||||||
|
assert.NoError(t, err)
|
||||||
|
}
|
||||||
|
|||||||
@@ -3,6 +3,7 @@ package codec
|
|||||||
import (
|
import (
|
||||||
"crypto/rand"
|
"crypto/rand"
|
||||||
"crypto/rsa"
|
"crypto/rsa"
|
||||||
|
"crypto/sha256"
|
||||||
"crypto/x509"
|
"crypto/x509"
|
||||||
"encoding/base64"
|
"encoding/base64"
|
||||||
"encoding/pem"
|
"encoding/pem"
|
||||||
@@ -46,7 +47,9 @@ type (
|
|||||||
}
|
}
|
||||||
)
|
)
|
||||||
|
|
||||||
// NewRsaDecrypter returns a RsaDecrypter with the given file.
|
// Deprecated: NewRsaDecrypter returns a RsaDecrypter with the given file.
|
||||||
|
// PKCS#1 v1.5 padding is vulnerable to padding oracle attacks.
|
||||||
|
// Use NewRsaOAEPDecrypter instead.
|
||||||
func NewRsaDecrypter(file string) (RsaDecrypter, error) {
|
func NewRsaDecrypter(file string) (RsaDecrypter, error) {
|
||||||
content, err := os.ReadFile(file)
|
content, err := os.ReadFile(file)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
@@ -90,7 +93,9 @@ func (r *rsaDecrypter) DecryptBase64(input string) ([]byte, error) {
|
|||||||
return r.Decrypt(base64Decoded)
|
return r.Decrypt(base64Decoded)
|
||||||
}
|
}
|
||||||
|
|
||||||
// NewRsaEncrypter returns a RsaEncrypter with the given key.
|
// Deprecated: NewRsaEncrypter returns a RsaEncrypter with the given key.
|
||||||
|
// PKCS#1 v1.5 padding is vulnerable to padding oracle attacks.
|
||||||
|
// Use NewRsaOAEPEncrypter instead.
|
||||||
func NewRsaEncrypter(key []byte) (RsaEncrypter, error) {
|
func NewRsaEncrypter(key []byte) (RsaEncrypter, error) {
|
||||||
block, _ := pem.Decode(key)
|
block, _ := pem.Decode(key)
|
||||||
if block == nil {
|
if block == nil {
|
||||||
@@ -154,3 +159,90 @@ func rsaDecryptBlock(privateKey *rsa.PrivateKey, block []byte) ([]byte, error) {
|
|||||||
func rsaEncryptBlock(publicKey *rsa.PublicKey, msg []byte) ([]byte, error) {
|
func rsaEncryptBlock(publicKey *rsa.PublicKey, msg []byte) ([]byte, error) {
|
||||||
return rsa.EncryptPKCS1v15(rand.Reader, publicKey, msg)
|
return rsa.EncryptPKCS1v15(rand.Reader, publicKey, msg)
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// NewRsaOAEPDecrypter returns a RsaDecrypter using OAEP with SHA-256.
|
||||||
|
func NewRsaOAEPDecrypter(file string) (RsaDecrypter, error) {
|
||||||
|
content, err := os.ReadFile(file)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
block, _ := pem.Decode(content)
|
||||||
|
if block == nil {
|
||||||
|
return nil, ErrPrivateKey
|
||||||
|
}
|
||||||
|
|
||||||
|
privateKey, err := x509.ParsePKCS1PrivateKey(block.Bytes)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
return &rsaOAEPDecrypter{
|
||||||
|
rsaBase: rsaBase{
|
||||||
|
bytesLimit: privateKey.N.BitLen() >> 3,
|
||||||
|
},
|
||||||
|
privateKey: privateKey,
|
||||||
|
}, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// NewRsaOAEPEncrypter returns a RsaEncrypter using OAEP with SHA-256.
|
||||||
|
func NewRsaOAEPEncrypter(key []byte) (RsaEncrypter, error) {
|
||||||
|
block, _ := pem.Decode(key)
|
||||||
|
if block == nil {
|
||||||
|
return nil, ErrPublicKey
|
||||||
|
}
|
||||||
|
|
||||||
|
pub, err := x509.ParsePKIXPublicKey(block.Bytes)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
switch pubKey := pub.(type) {
|
||||||
|
case *rsa.PublicKey:
|
||||||
|
// OAEP overhead: 2*hash_size + 2
|
||||||
|
hashSize := sha256.New().Size()
|
||||||
|
return &rsaOAEPEncrypter{
|
||||||
|
rsaBase: rsaBase{
|
||||||
|
bytesLimit: (pubKey.N.BitLen() >> 3) - 2*hashSize - 2,
|
||||||
|
},
|
||||||
|
publicKey: pubKey,
|
||||||
|
}, nil
|
||||||
|
default:
|
||||||
|
return nil, ErrNotRsaKey
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
type rsaOAEPDecrypter struct {
|
||||||
|
rsaBase
|
||||||
|
privateKey *rsa.PrivateKey
|
||||||
|
}
|
||||||
|
|
||||||
|
func (r *rsaOAEPDecrypter) Decrypt(input []byte) ([]byte, error) {
|
||||||
|
return r.crypt(input, func(block []byte) ([]byte, error) {
|
||||||
|
return rsa.DecryptOAEP(sha256.New(), rand.Reader, r.privateKey, block, nil)
|
||||||
|
})
|
||||||
|
}
|
||||||
|
|
||||||
|
func (r *rsaOAEPDecrypter) DecryptBase64(input string) ([]byte, error) {
|
||||||
|
if len(input) == 0 {
|
||||||
|
return nil, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
base64Decoded, err := base64.StdEncoding.DecodeString(input)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
return r.Decrypt(base64Decoded)
|
||||||
|
}
|
||||||
|
|
||||||
|
type rsaOAEPEncrypter struct {
|
||||||
|
rsaBase
|
||||||
|
publicKey *rsa.PublicKey
|
||||||
|
}
|
||||||
|
|
||||||
|
func (r *rsaOAEPEncrypter) Encrypt(input []byte) ([]byte, error) {
|
||||||
|
return r.crypt(input, func(block []byte) ([]byte, error) {
|
||||||
|
return rsa.EncryptOAEP(sha256.New(), rand.Reader, r.publicKey, block, nil)
|
||||||
|
})
|
||||||
|
}
|
||||||
|
|||||||
@@ -1,7 +1,12 @@
|
|||||||
package codec
|
package codec
|
||||||
|
|
||||||
import (
|
import (
|
||||||
|
"crypto/ecdsa"
|
||||||
|
"crypto/elliptic"
|
||||||
|
"crypto/rand"
|
||||||
|
"crypto/x509"
|
||||||
"encoding/base64"
|
"encoding/base64"
|
||||||
|
"encoding/pem"
|
||||||
"os"
|
"os"
|
||||||
"testing"
|
"testing"
|
||||||
|
|
||||||
@@ -58,3 +63,78 @@ func TestBadPubKey(t *testing.T) {
|
|||||||
_, err := NewRsaEncrypter([]byte("foo"))
|
_, err := NewRsaEncrypter([]byte("foo"))
|
||||||
assert.Equal(t, ErrPublicKey, err)
|
assert.Equal(t, ErrPublicKey, err)
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func TestOAEPCryption(t *testing.T) {
|
||||||
|
enc, err := NewRsaOAEPEncrypter([]byte(pubKey))
|
||||||
|
assert.Nil(t, err)
|
||||||
|
ret, err := enc.Encrypt([]byte(testBody))
|
||||||
|
assert.Nil(t, err)
|
||||||
|
|
||||||
|
file, err := fs.TempFilenameWithText(priKey)
|
||||||
|
assert.Nil(t, err)
|
||||||
|
defer os.Remove(file)
|
||||||
|
dec, err := NewRsaOAEPDecrypter(file)
|
||||||
|
assert.Nil(t, err)
|
||||||
|
actual, err := dec.Decrypt(ret)
|
||||||
|
assert.Nil(t, err)
|
||||||
|
assert.Equal(t, testBody, string(actual))
|
||||||
|
|
||||||
|
actual, err = dec.DecryptBase64(base64.StdEncoding.EncodeToString(ret))
|
||||||
|
assert.Nil(t, err)
|
||||||
|
assert.Equal(t, testBody, string(actual))
|
||||||
|
|
||||||
|
// empty input
|
||||||
|
actual, err = dec.DecryptBase64("")
|
||||||
|
assert.Nil(t, err)
|
||||||
|
assert.Nil(t, actual)
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestOAEPBadKeys(t *testing.T) {
|
||||||
|
_, err := NewRsaOAEPEncrypter([]byte("bad"))
|
||||||
|
assert.Equal(t, ErrPublicKey, err)
|
||||||
|
|
||||||
|
_, err = NewRsaOAEPDecrypter("nonexistent")
|
||||||
|
assert.Error(t, err)
|
||||||
|
|
||||||
|
// valid PEM but invalid private key content
|
||||||
|
badPem, err := fs.TempFilenameWithText("-----BEGIN RSA PRIVATE KEY-----\nYmFk\n-----END RSA PRIVATE KEY-----")
|
||||||
|
assert.Nil(t, err)
|
||||||
|
defer os.Remove(badPem)
|
||||||
|
_, err = NewRsaOAEPDecrypter(badPem)
|
||||||
|
assert.Error(t, err)
|
||||||
|
|
||||||
|
// not PEM content at all
|
||||||
|
notPem, err := fs.TempFilenameWithText("not a pem file")
|
||||||
|
assert.Nil(t, err)
|
||||||
|
defer os.Remove(notPem)
|
||||||
|
_, err = NewRsaOAEPDecrypter(notPem)
|
||||||
|
assert.Equal(t, ErrPrivateKey, err)
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestOAEPEncrypterParseError(t *testing.T) {
|
||||||
|
// valid PEM block but invalid public key content
|
||||||
|
badPub := []byte("-----BEGIN PUBLIC KEY-----\nYmFk\n-----END PUBLIC KEY-----")
|
||||||
|
_, err := NewRsaOAEPEncrypter(badPub)
|
||||||
|
assert.Error(t, err)
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestOAEPEncrypterNonRsaKey(t *testing.T) {
|
||||||
|
ecKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
|
||||||
|
assert.Nil(t, err)
|
||||||
|
derBytes, err := x509.MarshalPKIXPublicKey(&ecKey.PublicKey)
|
||||||
|
assert.Nil(t, err)
|
||||||
|
ecPem := pem.EncodeToMemory(&pem.Block{Type: "PUBLIC KEY", Bytes: derBytes})
|
||||||
|
_, err = NewRsaOAEPEncrypter(ecPem)
|
||||||
|
assert.Equal(t, ErrNotRsaKey, err)
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestOAEPDecryptBase64Error(t *testing.T) {
|
||||||
|
file, err := fs.TempFilenameWithText(priKey)
|
||||||
|
assert.Nil(t, err)
|
||||||
|
defer os.Remove(file)
|
||||||
|
dec, err := NewRsaOAEPDecrypter(file)
|
||||||
|
assert.Nil(t, err)
|
||||||
|
|
||||||
|
_, err = dec.DecryptBase64("not-valid-base64!!!")
|
||||||
|
assert.Error(t, err)
|
||||||
|
}
|
||||||
|
|||||||
@@ -81,6 +81,10 @@ func (c *Cache) Del(key string) {
|
|||||||
delete(c.data, key)
|
delete(c.data, key)
|
||||||
c.lruCache.remove(key)
|
c.lruCache.remove(key)
|
||||||
c.lock.Unlock()
|
c.lock.Unlock()
|
||||||
|
|
||||||
|
// RemoveTimer is called outside the lock to avoid performance impact from this
|
||||||
|
// potentially time-consuming operation. Data integrity is maintained by lruCache,
|
||||||
|
// which will eventually evict any remaining entries when capacity is exceeded.
|
||||||
c.timingWheel.RemoveTimer(key)
|
c.timingWheel.RemoveTimer(key)
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|||||||
@@ -164,6 +164,7 @@ func (tw *TimingWheel) Stop() {
|
|||||||
|
|
||||||
func (tw *TimingWheel) drainAll(fn func(key, value any)) {
|
func (tw *TimingWheel) drainAll(fn func(key, value any)) {
|
||||||
runner := threading.NewTaskRunner(drainWorkers)
|
runner := threading.NewTaskRunner(drainWorkers)
|
||||||
|
|
||||||
for _, slot := range tw.slots {
|
for _, slot := range tw.slots {
|
||||||
for e := slot.Front(); e != nil; {
|
for e := slot.Front(); e != nil; {
|
||||||
task := e.Value.(*timingEntry)
|
task := e.Value.(*timingEntry)
|
||||||
@@ -177,6 +178,8 @@ func (tw *TimingWheel) drainAll(fn func(key, value any)) {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
runner.Wait()
|
||||||
}
|
}
|
||||||
|
|
||||||
func (tw *TimingWheel) getPositionAndCircle(d time.Duration) (pos, circle int) {
|
func (tw *TimingWheel) getPositionAndCircle(d time.Duration) (pos, circle int) {
|
||||||
|
|||||||
@@ -629,6 +629,157 @@ func TestMoveAndRemoveTask(t *testing.T) {
|
|||||||
assert.Equal(t, 0, len(keys))
|
assert.Equal(t, 0, len(keys))
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// TestTimingWheel_DrainClosureBug tests the closure capture bug in drainAll
|
||||||
|
// Issue: https://github.com/zeromicro/go-zero/issues/5314
|
||||||
|
func TestTimingWheel_DrainClosureBug(t *testing.T) {
|
||||||
|
ticker := timex.NewFakeTicker()
|
||||||
|
tw, _ := NewTimingWheelWithTicker(testStep, 10, func(k, v any) {}, ticker)
|
||||||
|
defer tw.Stop()
|
||||||
|
|
||||||
|
// Set multiple timers with different values
|
||||||
|
for i := 0; i < 10; i++ {
|
||||||
|
tw.SetTimer(i, i*10, testStep*5)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Give time for timers to be set
|
||||||
|
time.Sleep(time.Millisecond * 100)
|
||||||
|
|
||||||
|
var mu sync.Mutex
|
||||||
|
received := make(map[int]int)
|
||||||
|
var wg sync.WaitGroup
|
||||||
|
wg.Add(10)
|
||||||
|
|
||||||
|
tw.Drain(func(key, value any) {
|
||||||
|
mu.Lock()
|
||||||
|
defer mu.Unlock()
|
||||||
|
k := key.(int)
|
||||||
|
v := value.(int)
|
||||||
|
received[k] = v
|
||||||
|
wg.Done()
|
||||||
|
})
|
||||||
|
|
||||||
|
wg.Wait()
|
||||||
|
|
||||||
|
// Check if all values match their keys
|
||||||
|
for k, v := range received {
|
||||||
|
expected := k * 10
|
||||||
|
assert.Equal(t, expected, v, "key %d should have value %d, got %d", k, expected, v)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// TestTimingWheel_RunTasksClosureBug tests the closure capture bug in runTasks
|
||||||
|
// Issue: https://github.com/zeromicro/go-zero/issues/5314
|
||||||
|
func TestTimingWheel_RunTasksClosureBug(t *testing.T) {
|
||||||
|
ticker := timex.NewFakeTicker()
|
||||||
|
var mu sync.Mutex
|
||||||
|
executed := make(map[int]int)
|
||||||
|
var wg sync.WaitGroup
|
||||||
|
|
||||||
|
tw, _ := NewTimingWheelWithTicker(testStep, 10, func(k, v any) {
|
||||||
|
mu.Lock()
|
||||||
|
defer mu.Unlock()
|
||||||
|
key := k.(int)
|
||||||
|
val := v.(int)
|
||||||
|
executed[key] = val
|
||||||
|
wg.Done()
|
||||||
|
}, ticker)
|
||||||
|
defer tw.Stop()
|
||||||
|
|
||||||
|
// Set multiple timers that should fire in the same tick
|
||||||
|
count := 10
|
||||||
|
wg.Add(count)
|
||||||
|
for i := 0; i < count; i++ {
|
||||||
|
tw.SetTimer(i, i*10, testStep)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Advance ticker to trigger tasks
|
||||||
|
ticker.Tick()
|
||||||
|
|
||||||
|
// Wait for execution with timeout
|
||||||
|
done := make(chan struct{})
|
||||||
|
go func() {
|
||||||
|
wg.Wait()
|
||||||
|
close(done)
|
||||||
|
}()
|
||||||
|
|
||||||
|
select {
|
||||||
|
case <-done:
|
||||||
|
// Success
|
||||||
|
case <-time.After(2 * time.Second):
|
||||||
|
t.Fatal("timeout waiting for tasks to execute")
|
||||||
|
}
|
||||||
|
|
||||||
|
// Verify all tasks executed with correct values
|
||||||
|
assert.Equal(t, count, len(executed), "should have executed all tasks")
|
||||||
|
for k, v := range executed {
|
||||||
|
expected := k * 10
|
||||||
|
assert.Equal(t, expected, v, "key %d should have value %d, got %d", k, expected, v)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// TestTimingWheel_RunTasksRaceCondition tests for race conditions in runTasks
|
||||||
|
// This test specifically targets the loop variable capture bug
|
||||||
|
func TestTimingWheel_RunTasksRaceCondition(t *testing.T) {
|
||||||
|
// Run multiple times to increase likelihood of catching the bug
|
||||||
|
for attempt := 0; attempt < 10; attempt++ {
|
||||||
|
t.Run("", func(t *testing.T) {
|
||||||
|
ticker := timex.NewFakeTicker()
|
||||||
|
var mu sync.Mutex
|
||||||
|
keyValues := make(map[int][]int)
|
||||||
|
var wg sync.WaitGroup
|
||||||
|
|
||||||
|
tw, _ := NewTimingWheelWithTicker(testStep, 10, func(k, v any) {
|
||||||
|
// Add small delay to increase chance of race
|
||||||
|
time.Sleep(time.Microsecond)
|
||||||
|
mu.Lock()
|
||||||
|
defer mu.Unlock()
|
||||||
|
key := k.(int)
|
||||||
|
val := v.(int)
|
||||||
|
keyValues[key] = append(keyValues[key], val)
|
||||||
|
wg.Done()
|
||||||
|
}, ticker)
|
||||||
|
defer tw.Stop()
|
||||||
|
|
||||||
|
// Set many timers rapidly to increase chance of race
|
||||||
|
count := 50
|
||||||
|
wg.Add(count)
|
||||||
|
for i := 0; i < count; i++ {
|
||||||
|
tw.SetTimer(i, i*100, testStep)
|
||||||
|
}
|
||||||
|
|
||||||
|
ticker.Tick()
|
||||||
|
|
||||||
|
done := make(chan struct{})
|
||||||
|
go func() {
|
||||||
|
wg.Wait()
|
||||||
|
close(done)
|
||||||
|
}()
|
||||||
|
|
||||||
|
select {
|
||||||
|
case <-done:
|
||||||
|
case <-time.After(5 * time.Second):
|
||||||
|
t.Fatal("timeout waiting for tasks")
|
||||||
|
}
|
||||||
|
|
||||||
|
// Check for duplicates or wrong values
|
||||||
|
wrongCount := 0
|
||||||
|
for key, values := range keyValues {
|
||||||
|
assert.Equal(t, 1, len(values), "key %d should only execute once, got %v", key, values)
|
||||||
|
if len(values) > 0 {
|
||||||
|
expected := key * 100
|
||||||
|
if values[0] != expected {
|
||||||
|
wrongCount++
|
||||||
|
t.Logf("BUG DETECTED: key %d should have value %d, got %d", key, expected, values[0])
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if wrongCount > 0 {
|
||||||
|
t.Errorf("Found %d tasks with wrong values due to closure bug", wrongCount)
|
||||||
|
}
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
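The closure-capture tests above target the classic Go loop-variable bug (issue #5314): before Go 1.22, goroutines launched in a loop shared a single loop variable, so without a per-iteration copy they could all observe its final value. A distilled sketch of the pattern being tested (assumed helper `collect`, not go-zero code):

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// collect launches one goroutine per index and gathers what each one saw.
// The shadowing copy inside the loop is the fix; dropping it reintroduces
// the bug under pre-1.22 semantics.
func collect(n int) []int {
	var (
		mu  sync.Mutex
		wg  sync.WaitGroup
		got []int
	)
	for i := 0; i < n; i++ {
		i := i // per-iteration copy
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			got = append(got, i)
			mu.Unlock()
		}()
	}
	wg.Wait()
	sort.Ints(got)
	return got
}

func main() {
	fmt.Println(collect(5)) // [0 1 2 3 4]
}
```

With the shared-variable bug, several goroutines report the same (final) value, which is exactly the duplicate/wrong-value condition `TestTimingWheel_RunTasksRaceCondition` asserts against.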
||||||
func BenchmarkTimingWheel(b *testing.B) {
|
func BenchmarkTimingWheel(b *testing.B) {
|
||||||
b.ReportAllocs()
|
b.ReportAllocs()
|
||||||
|
|
||||||
|
|||||||
@@ -21,10 +21,11 @@ const (
|
|||||||
var (
|
var (
|
||||||
fillDefaultUnmarshaler = mapping.NewUnmarshaler(jsonTagKey, mapping.WithDefault())
|
fillDefaultUnmarshaler = mapping.NewUnmarshaler(jsonTagKey, mapping.WithDefault())
|
||||||
loaders = map[string]func([]byte, any) error{
|
loaders = map[string]func([]byte, any) error{
|
||||||
".json": LoadFromJsonBytes,
|
".json": LoadFromJsonBytes,
|
||||||
".toml": LoadFromTomlBytes,
|
".json5": LoadFromJson5Bytes,
|
||||||
".yaml": LoadFromYamlBytes,
|
".toml": LoadFromTomlBytes,
|
||||||
".yml": LoadFromYamlBytes,
|
".yaml": LoadFromYamlBytes,
|
||||||
|
".yml": LoadFromYamlBytes,
|
||||||
}
|
}
|
||||||
)
|
)
|
||||||
|
|
||||||
@@ -41,7 +42,7 @@ func FillDefault(v any) error {
|
|||||||
return fillDefaultUnmarshaler.Unmarshal(map[string]any{}, v)
|
return fillDefaultUnmarshaler.Unmarshal(map[string]any{}, v)
|
||||||
}
|
}
|
||||||
|
|
||||||
// Load loads config into v from file, .json, .yaml and .yml are acceptable.
|
// Load loads config into v from file, .json, .json5, .toml, .yaml and .yml are acceptable.
|
||||||
func Load(file string, v any, opts ...Option) error {
|
func Load(file string, v any, opts ...Option) error {
|
||||||
content, err := os.ReadFile(file)
|
content, err := os.ReadFile(file)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
@@ -62,14 +63,10 @@ func Load(file string, v any, opts ...Option) error {
|
|||||||
return loader([]byte(os.ExpandEnv(string(content))), v)
|
return loader([]byte(os.ExpandEnv(string(content))), v)
|
||||||
}
|
}
|
||||||
|
|
||||||
if err = loader(content, v); err != nil {
|
return loader(content, v)
|
||||||
return err
|
|
||||||
}
|
|
||||||
|
|
||||||
return validate(v)
|
|
||||||
}
|
}
|
||||||
|
|
||||||
// LoadConfig loads config into v from file, .json, .yaml and .yml are acceptable.
|
// LoadConfig loads config into v from file, .json, .json5, .toml, .yaml and .yml are acceptable.
|
||||||
// Deprecated: use Load instead.
|
// Deprecated: use Load instead.
|
||||||
func LoadConfig(file string, v any, opts ...Option) error {
|
func LoadConfig(file string, v any, opts ...Option) error {
|
||||||
return Load(file, v, opts...)
|
return Load(file, v, opts...)
|
||||||
@@ -123,6 +120,16 @@ func LoadFromYamlBytes(content []byte, v any) error {
|
|||||||
return LoadFromJsonBytes(b, v)
|
return LoadFromJsonBytes(b, v)
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// LoadFromJson5Bytes loads config into v from content json5 bytes.
|
||||||
|
func LoadFromJson5Bytes(content []byte, v any) error {
|
||||||
|
b, err := encoding.Json5ToJson(content)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
return LoadFromJsonBytes(b, v)
|
||||||
|
}
|
||||||
|
|
||||||
// LoadConfigFromYamlBytes loads config into v from content yaml bytes.
|
// LoadConfigFromYamlBytes loads config into v from content yaml bytes.
|
||||||
// Deprecated: use LoadFromYamlBytes instead.
|
// Deprecated: use LoadFromYamlBytes instead.
|
||||||
func LoadConfigFromYamlBytes(content []byte, v any) error {
|
func LoadConfigFromYamlBytes(content []byte, v any) error {
|
||||||
@@ -368,5 +375,5 @@ func getFullName(parent, child string) string {
|
|||||||
return child
|
return child
|
||||||
}
|
}
|
||||||
|
|
||||||
return strings.Join([]string{parent, child}, ".")
|
return parent + "." + child
|
||||||
}
|
}
|
||||||
|
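The loader-map change above registers `.json5` alongside the existing suffixes, so `Load` keeps its extension-dispatch shape. A minimal sketch of that pattern with stand-in loader names (hypothetical `pickLoader`, not the real API):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// Stand-in registry mirroring the loaders map above; values here are
// just names rather than decode functions.
var loaderNames = map[string]string{
	".json":  "json",
	".json5": "json5",
	".toml":  "toml",
	".yaml":  "yaml",
	".yml":   "yaml",
}

// pickLoader chooses a decoder by file suffix, the same dispatch Load does.
func pickLoader(file string) (string, error) {
	ext := strings.ToLower(filepath.Ext(file))
	name, ok := loaderNames[ext]
	if !ok {
		return "", fmt.Errorf("unrecognized file type: %s", file)
	}
	return name, nil
}

func main() {
	fmt.Println(pickLoader("etc/app.json5"))
	fmt.Println(pickLoader("etc/app.ini"))
}
```

Keeping `.json` on the standard parser while routing only `.json5` through the JSON5 path preserves backward compatibility, including large-integer precision, as the tests below document.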
|||||||
@@ -75,6 +75,160 @@ func TestLoadFromJsonBytesArray(t *testing.T) {
|
|||||||
assert.EqualValues(t, []string{"foo", "bar"}, expect)
|
assert.EqualValues(t, []string{"foo", "bar"}, expect)
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func TestConfigJson5(t *testing.T) {
|
||||||
|
// JSON5 with comments, trailing commas, and unquoted keys
|
||||||
|
text := `{
|
||||||
|
// This is a comment
|
||||||
|
a: 'foo', // single quotes
|
||||||
|
b: 1,
|
||||||
|
c: "${FOO}",
|
||||||
|
d: "abcd!@#$112", // trailing comma
|
||||||
|
}`
|
||||||
|
t.Setenv("FOO", "2")
|
||||||
|
|
||||||
|
tmpfile, err := createTempFile(t, ".json5", text)
|
||||||
|
assert.Nil(t, err)
|
||||||
|
|
||||||
|
var val struct {
|
||||||
|
A string `json:"a"`
|
||||||
|
B int `json:"b"`
|
||||||
|
C string `json:"c"`
|
||||||
|
D string `json:"d"`
|
||||||
|
}
|
||||||
|
MustLoad(tmpfile, &val)
|
||||||
|
assert.Equal(t, "foo", val.A)
|
||||||
|
assert.Equal(t, 1, val.B)
|
||||||
|
assert.Equal(t, "${FOO}", val.C)
|
||||||
|
assert.Equal(t, "abcd!@#$112", val.D)
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestConfigJsonStandardParser(t *testing.T) {
|
||||||
|
// Standard JSON uses standard JSON parser (not JSON5) for backward compatibility
|
||||||
|
text := `{
|
||||||
|
"a": "foo",
|
||||||
|
"b": 1,
|
||||||
|
"c": "${FOO}",
|
||||||
|
"d": "abcd!@#$112"
|
||||||
|
}`
|
||||||
|
t.Setenv("FOO", "2")
|
||||||
|
|
||||||
|
tmpfile, err := createTempFile(t, ".json", text)
|
||||||
|
assert.Nil(t, err)
|
||||||
|
|
||||||
|
var val struct {
|
||||||
|
A string `json:"a"`
|
||||||
|
B int `json:"b"`
|
||||||
|
C string `json:"c"`
|
||||||
|
D string `json:"d"`
|
||||||
|
}
|
||||||
|
MustLoad(tmpfile, &val)
|
||||||
|
assert.Equal(t, "foo", val.A)
|
||||||
|
assert.Equal(t, 1, val.B)
|
||||||
|
assert.Equal(t, "${FOO}", val.C)
|
||||||
|
assert.Equal(t, "abcd!@#$112", val.D)
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestConfigJsonLargeIntegers(t *testing.T) {
|
||||||
|
// Test that .json files preserve large integer precision (backward compatibility)
|
||||||
|
text := `{
|
||||||
|
"id": 1234567890123456789,
|
||||||
|
"timestamp": 9223372036854775807
|
||||||
|
}`
|
||||||
|
|
||||||
|
tmpfile, err := createTempFile(t, ".json", text)
|
||||||
|
assert.Nil(t, err)
|
||||||
|
|
||||||
|
var val struct {
|
||||||
|
ID int64 `json:"id"`
|
||||||
|
Timestamp int64 `json:"timestamp"`
|
||||||
|
}
|
||||||
|
MustLoad(tmpfile, &val)
|
||||||
|
assert.Equal(t, int64(1234567890123456789), val.ID)
|
||||||
|
assert.Equal(t, int64(9223372036854775807), val.Timestamp)
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestConfigJson5Env(t *testing.T) {
|
||||||
|
text := `{
|
||||||
|
// Comment with env variable
|
||||||
|
a: "foo",
|
||||||
|
b: 1,
|
||||||
|
c: "${FOO}",
|
||||||
|
d: "abcd!@#$a12 3",
|
||||||
|
}`
|
||||||
|
t.Setenv("FOO", "2")
|
||||||
|
|
||||||
|
tmpfile, err := createTempFile(t, ".json5", text)
|
||||||
|
assert.Nil(t, err)
|
||||||
|
|
||||||
|
var val struct {
|
||||||
|
A string `json:"a"`
|
||||||
|
B int `json:"b"`
|
||||||
|
C string `json:"c"`
|
||||||
|
D string `json:"d"`
|
||||||
|
}
|
||||||
|
MustLoad(tmpfile, &val, UseEnv())
|
||||||
|
assert.Equal(t, "foo", val.A)
|
||||||
|
assert.Equal(t, 1, val.B)
|
||||||
|
assert.Equal(t, "2", val.C)
|
||||||
|
assert.Equal(t, "abcd!@# 3", val.D)
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestLoadFromJson5Bytes(t *testing.T) {
	// Test JSON5 features: comments, trailing commas, single quotes, unquoted keys
	input := []byte(`{
	// This is a comment
	users: [
		{name: 'foo'}, // trailing comma
		{Name: "bar"},
	],
}`)
	var val struct {
		Users []struct {
			Name string
		}
	}

	assert.NoError(t, LoadFromJson5Bytes(input, &val))
	var expect []string
	for _, user := range val.Users {
		expect = append(expect, user.Name)
	}
	assert.EqualValues(t, []string{"foo", "bar"}, expect)
}
func TestLoadFromJson5BytesError(t *testing.T) {
	// Invalid JSON5 syntax
	input := []byte(`{a: foo}`) // unquoted string value (invalid)
	var val struct {
		A string
	}

	assert.Error(t, LoadFromJson5Bytes(input, &val))
}
func TestConfigJson5LargeIntegersLimitation(t *testing.T) {
	// Document that JSON5 has precision limitations for large integers (>2^53)
	// due to JavaScript number semantics. Users should use .json for configs with large IDs.
	text := `{
	// JSON5 converts numbers to float64, which loses precision for large integers
	id: 1234567890123456789
}`

	tmpfile, err := createTempFile(t, ".json5", text)
	assert.Nil(t, err)

	var val struct {
		ID int64 `json:"id"`
	}

	// This will load; depending on the JSON5 implementation, large integers may lose precision.
	// This test documents that behavior without requiring loss of precision as an invariant.
	err = Load(tmpfile, &val)
	assert.NoError(t, err)

	t.Logf("loaded JSON5 large integer id=%d (original 1234567890123456789)", val.ID)
}
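The >2^53 limitation the test documents is plain float64 arithmetic: the 53-bit mantissa cannot represent every integer of that magnitude, so a round trip through `float64` changes the value. A self-contained demonstration (`roundTrip` is a hypothetical helper for the sketch):

```go
package main

import "fmt"

// roundTrip sends an int64 through float64 and back, which is effectively
// what a JSON5 decoder does when it models all numbers as float64.
func roundTrip(x int64) int64 {
	return int64(float64(x))
}

func main() {
	const id int64 = 1234567890123456789
	got := roundTrip(id)
	// Values above 2^53 land on the nearest representable float64,
	// so the round trip does not return the original integer.
	fmt.Println(id == got, got)
}
```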
func TestConfigToml(t *testing.T) {
	text := `a = "foo"
b = 1
@@ -1377,3 +1531,242 @@ func (m mockConfig) Validate() error {
	return nil
}

func TestGetFullName(t *testing.T) {
	tests := []struct {
		parent string
		child  string
		want   string
	}{
		{"", "child", "child"},
		{"parent", "child", "parent.child"},
		{"a.b", "c", "a.b.c"},
		{"root", "nested.field", "root.nested.field"},
	}

	for _, tt := range tests {
		t.Run(tt.parent+"."+tt.child, func(t *testing.T) {
			got := getFullName(tt.parent, tt.child)
			assert.Equal(t, tt.want, got)
		})
	}
}
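Judging from the table in TestGetFullName, `getFullName` joins a parent path and a child key with a dot, with an empty parent yielding the child alone. A minimal sketch of that behavior (`joinName` is a hypothetical stand-in, not the go-zero function itself):

```go
package main

import "fmt"

// joinName mirrors the behavior exercised by TestGetFullName: an empty
// parent yields the child alone, otherwise the parts join with a dot.
func joinName(parent, child string) string {
	if len(parent) == 0 {
		return child
	}
	return parent + "." + child
}

func main() {
	fmt.Println(joinName("", "child"))
	fmt.Println(joinName("a.b", "c"))
}
```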
// validatorConfig is a test config that implements Validate() for testing validation behavior
type validatorConfig struct {
	Value int `json:"value"`
}

func (v *validatorConfig) Validate() error {
	if v.Value < 10 {
		return errors.New("value must be >= 10")
	}
	return nil
}
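The tests below rely on the loader invoking an optional `Validate()` hook after unmarshalling. A standalone sketch of that load-then-validate pattern, using `encoding/json` in place of go-zero's loader (`Validator` and `loadAndValidate` are hypothetical names for the sketch):

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// Validator is the optional hook a config struct can implement.
type Validator interface {
	Validate() error
}

type conf struct {
	Value int `json:"value"`
}

func (c *conf) Validate() error {
	if c.Value < 10 {
		return errors.New("value must be >= 10")
	}
	return nil
}

// loadAndValidate unmarshals into v, then runs Validate if implemented.
func loadAndValidate(data []byte, v any) error {
	if err := json.Unmarshal(data, v); err != nil {
		return err
	}
	if val, ok := v.(Validator); ok {
		return val.Validate()
	}
	return nil
}

func main() {
	var c conf
	fmt.Println(loadAndValidate([]byte(`{"value": 5}`), &c))
}
```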
// TestLoadValidation_WithoutEnv tests that validation is called correctly in normal loading path
func TestLoadValidation_WithoutEnv(t *testing.T) {
	tests := []struct {
		name      string
		extension string
		content   string
		wantErr   bool
		errMsg    string
	}{
		{
			name:      "json valid value",
			extension: ".json",
			content:   `{"value": 15}`,
			wantErr:   false,
		},
		{
			name:      "json invalid value",
			extension: ".json",
			content:   `{"value": 5}`,
			wantErr:   true,
			errMsg:    "value must be >= 10",
		},
		{
			name:      "yaml valid value",
			extension: ".yaml",
			content:   "value: 20\n",
			wantErr:   false,
		},
		{
			name:      "yaml invalid value",
			extension: ".yaml",
			content:   "value: 3\n",
			wantErr:   true,
			errMsg:    "value must be >= 10",
		},
		{
			name:      "toml valid value",
			extension: ".toml",
			content:   "value = 100\n",
			wantErr:   false,
		},
		{
			name:      "toml invalid value",
			extension: ".toml",
			content:   "value = 1\n",
			wantErr:   true,
			errMsg:    "value must be >= 10",
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			tmpfile, err := createTempFile(t, tt.extension, tt.content)
			assert.Nil(t, err)

			var cfg validatorConfig
			err = Load(tmpfile, &cfg)

			if tt.wantErr {
				assert.Error(t, err)
				assert.Contains(t, err.Error(), tt.errMsg)
			} else {
				assert.NoError(t, err)
			}
		})
	}
}
// TestLoadValidation_WithEnv tests that validation is called correctly with UseEnv() option
func TestLoadValidation_WithEnv(t *testing.T) {
	tests := []struct {
		name      string
		extension string
		content   string
		envValue  string
		wantErr   bool
		errMsg    string
	}{
		{
			name:      "json valid value with env",
			extension: ".json",
			content:   `{"value": ${TEST_VALUE}}`,
			envValue:  "25",
			wantErr:   false,
		},
		{
			name:      "json invalid value with env",
			extension: ".json",
			content:   `{"value": ${TEST_VALUE}}`,
			envValue:  "7",
			wantErr:   true,
			errMsg:    "value must be >= 10",
		},
		{
			name:      "yaml valid value with env",
			extension: ".yaml",
			content:   "value: ${TEST_VALUE}\n",
			envValue:  "50",
			wantErr:   false,
		},
		{
			name:      "yaml invalid value with env",
			extension: ".yaml",
			content:   "value: ${TEST_VALUE}\n",
			envValue:  "2",
			wantErr:   true,
			errMsg:    "value must be >= 10",
		},
		{
			name:      "toml valid value with env",
			extension: ".toml",
			content:   "value = ${TEST_VALUE}\n",
			envValue:  "99",
			wantErr:   false,
		},
		{
			name:      "toml invalid value with env",
			extension: ".toml",
			content:   "value = ${TEST_VALUE}\n",
			envValue:  "8",
			wantErr:   true,
			errMsg:    "value must be >= 10",
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			t.Setenv("TEST_VALUE", tt.envValue)

			tmpfile, err := createTempFile(t, tt.extension, tt.content)
			assert.Nil(t, err)

			var cfg validatorConfig
			err = Load(tmpfile, &cfg, UseEnv())

			if tt.wantErr {
				assert.Error(t, err)
				assert.Contains(t, err.Error(), tt.errMsg)
			} else {
				assert.NoError(t, err)
			}
		})
	}
}
// TestLoadValidation_Consistency verifies validation behavior is consistent between paths
func TestLoadValidation_Consistency(t *testing.T) {
	// Test that both paths (with and without UseEnv) produce the same validation results
	const validValue = 15

	formats := []struct {
		ext     string
		invalid string
		valid   string
	}{
		{".json", `{"value": 5}`, `{"value": 15}`},
		{".yaml", "value: 5\n", "value: 15\n"},
		{".toml", "value = 5\n", "value = 15\n"},
	}

	for _, format := range formats {
		t.Run("invalid_"+format.ext, func(t *testing.T) {
			// Test without UseEnv()
			tmpfile1, err := createTempFile(t, format.ext, format.invalid)
			assert.Nil(t, err)

			var cfg1 validatorConfig
			err1 := Load(tmpfile1, &cfg1)

			// Test with UseEnv()
			tmpfile2, err := createTempFile(t, format.ext, format.invalid)
			assert.Nil(t, err)

			var cfg2 validatorConfig
			err2 := Load(tmpfile2, &cfg2, UseEnv())

			// Both should fail validation
			assert.Error(t, err1, "validation should fail without UseEnv()")
			assert.Error(t, err2, "validation should fail with UseEnv()")
			assert.Contains(t, err1.Error(), "value must be >= 10")
			assert.Contains(t, err2.Error(), "value must be >= 10")
		})

		t.Run("valid_"+format.ext, func(t *testing.T) {
			// Test without UseEnv()
			tmpfile1, err := createTempFile(t, format.ext, format.valid)
			assert.Nil(t, err)

			var cfg1 validatorConfig
			err1 := Load(tmpfile1, &cfg1)

			// Test with UseEnv()
			tmpfile2, err := createTempFile(t, format.ext, format.valid)
			assert.Nil(t, err)

			var cfg2 validatorConfig
			err2 := Load(tmpfile2, &cfg2, UseEnv())

			// Both should pass validation
			assert.NoError(t, err1, "validation should pass without UseEnv()")
			assert.NoError(t, err2, "validation should pass with UseEnv()")
			assert.Equal(t, validValue, cfg1.Value)
			assert.Equal(t, validValue, cfg2.Value)
		})
	}
}
@@ -45,7 +45,7 @@ func LoadProperties(filename string, opts ...Option) (Properties, error) {

 	raw := make(map[string]string)
 	for i := range lines {
-		pair := strings.Split(lines[i], "=")
+		pair := strings.SplitN(lines[i], "=", 2)
 		if len(pair) != 2 {
 			// invalid property format
 			return nil, &PropertyError{
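The `Split` → `SplitN` change above is the whole fix: `strings.SplitN(line, "=", 2)` splits only at the first `=`, so values that themselves contain `=` (URLs, base64, equations) stay intact instead of producing more than two parts. A standalone sketch (`splitProperty` is a hypothetical helper, not go-zero's code):

```go
package main

import (
	"fmt"
	"strings"
)

// splitProperty splits a key=value line at the first '=' only, so values
// containing '=' survive intact; lines without '=' are rejected.
func splitProperty(line string) (key, value string, ok bool) {
	pair := strings.SplitN(line, "=", 2)
	if len(pair) != 2 {
		return "", "", false
	}
	return pair[0], pair[1], true
}

func main() {
	k, v, _ := splitProperty("math.equation=a=b=c")
	fmt.Println(k, v)
}
```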
@@ -92,3 +92,70 @@ func TestLoadBadFile(t *testing.T) {
	_, err := LoadProperties("nosuchfile")
	assert.NotNil(t, err)
}

func TestProperties_valueWithEqualSymbols(t *testing.T) {
	text := `# test with equal symbols in value
db.url=postgres://localhost:5432/db?param=value
math.equation=a=b=c
base64.data=SGVsbG8=World=Test=
url.with.params=http://example.com?foo=bar&baz=qux
empty.value=
key.with.space = value = with = equals`
	tmpfile, err := fs.TempFilenameWithText(text)
	assert.Nil(t, err)
	defer os.Remove(tmpfile)

	props, err := LoadProperties(tmpfile)
	assert.Nil(t, err)
	assert.Equal(t, "postgres://localhost:5432/db?param=value", props.GetString("db.url"))
	assert.Equal(t, "a=b=c", props.GetString("math.equation"))
	assert.Equal(t, "SGVsbG8=World=Test=", props.GetString("base64.data"))
	assert.Equal(t, "http://example.com?foo=bar&baz=qux", props.GetString("url.with.params"))
	assert.Equal(t, "", props.GetString("empty.value"))
	assert.Equal(t, "value = with = equals", props.GetString("key.with.space"))
}
func TestProperties_edgeCases(t *testing.T) {
	tests := []struct {
		name    string
		content string
		wantErr bool
		errMsg  string
	}{
		{
			name:    "no equal sign",
			content: "invalid line without equal",
			wantErr: true,
		},
		{
			name:    "only equal sign",
			content: "=",
			wantErr: false, // "=" parses into an empty key and an empty value; len(pair) == 2, so it is accepted
		},
		{
			name:    "empty key",
			content: "=value",
			wantErr: false, // the empty key is trimmed, but len(pair) == 2, so no error is reported
		},
		{
			name:    "equal at end",
			content: "key.name=",
			wantErr: false, // an empty value is valid
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			tmpfile, err := fs.TempFilenameWithText(tt.content)
			assert.Nil(t, err)
			defer os.Remove(tmpfile)

			_, err = LoadProperties(tmpfile)
			if tt.wantErr {
				assert.NotNil(t, err, "expected error for case: %s", tt.name)
			} else {
				assert.Nil(t, err, "unexpected error for case: %s", tt.name)
			}
		})
	}
}
@@ -1,6 +1,9 @@
 package subscriber

 import (
+	"sync"
+	"sync/atomic"
+
 	"github.com/zeromicro/go-zero/core/discov"
 	"github.com/zeromicro/go-zero/core/logx"
 )
@@ -37,6 +40,7 @@ func NewEtcdSubscriber(conf EtcdConf) (Subscriber, error) {
 func buildSubOptions(conf EtcdConf) []discov.SubOption {
 	opts := []discov.SubOption{
 		discov.WithExactMatch(),
+		discov.WithContainer(newContainer()),
 	}

 	if len(conf.User) > 0 {
@@ -65,3 +69,47 @@ func (s *etcdSubscriber) Value() (string, error) {

	return "", nil
}

type container struct {
	value     atomic.Value
	listeners []func()
	lock      sync.Mutex
}

func newContainer() *container {
	return &container{}
}

func (c *container) OnAdd(kv discov.KV) {
	c.value.Store([]string{kv.Val})
	c.notifyChange()
}

func (c *container) OnDelete(_ discov.KV) {
	c.value.Store([]string(nil))
	c.notifyChange()
}

func (c *container) AddListener(listener func()) {
	c.lock.Lock()
	c.listeners = append(c.listeners, listener)
	c.lock.Unlock()
}

func (c *container) GetValues() []string {
	if vals, ok := c.value.Load().([]string); ok {
		return vals
	}

	return []string(nil)
}

func (c *container) notifyChange() {
	c.lock.Lock()
	listeners := append(([]func())(nil), c.listeners...)
	c.lock.Unlock()

	for _, listener := range listeners {
		listener()
	}
}
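Note how `notifyChange` above copies the listener slice while holding the lock and invokes the copies after releasing it, so a callback can safely call `AddListener` again without deadlocking. A standalone sketch of that snapshot-then-invoke pattern (the `notifier` type and `runOnce` helper are hypothetical, for illustration only):

```go
package main

import (
	"fmt"
	"sync"
)

type notifier struct {
	lock      sync.Mutex
	listeners []func()
}

func (n *notifier) AddListener(fn func()) {
	n.lock.Lock()
	n.listeners = append(n.listeners, fn)
	n.lock.Unlock()
}

// notify snapshots the listeners under the lock, then invokes the copies
// with the lock released, so callbacks may re-enter AddListener.
func (n *notifier) notify() {
	n.lock.Lock()
	snapshot := append(([]func())(nil), n.listeners...)
	n.lock.Unlock()

	for _, fn := range snapshot {
		fn()
	}
}

// runOnce registers a single counting listener, fires one notification,
// and returns how many times the listener ran.
func runOnce() int {
	var n notifier
	count := 0
	n.AddListener(func() { count++ })
	n.notify()
	return count
}

func main() {
	fmt.Println(runOnce())
}
```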
core/configcenter/subscriber/etcd_test.go (new file, 186 lines)
@@ -0,0 +1,186 @@
package subscriber

import (
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/zeromicro/go-zero/core/discov"
)

const (
	actionAdd = iota
	actionDel
)

func TestConfigCenterContainer(t *testing.T) {
	type action struct {
		act int
		key string
		val string
	}
	tests := []struct {
		name   string
		do     []action
		expect []string
	}{
		{
			name: "add one",
			do: []action{
				{act: actionAdd, key: "first", val: "a"},
			},
			expect: []string{"a"},
		},
		{
			name: "add two",
			do: []action{
				{act: actionAdd, key: "first", val: "a"},
				{act: actionAdd, key: "second", val: "b"},
			},
			expect: []string{"b"},
		},
		{
			name: "add two, delete one",
			do: []action{
				{act: actionAdd, key: "first", val: "a"},
				{act: actionAdd, key: "second", val: "b"},
				{act: actionDel, key: "first"},
			},
			expect: []string(nil),
		},
		{
			name: "add two, delete two",
			do: []action{
				{act: actionAdd, key: "first", val: "a"},
				{act: actionAdd, key: "second", val: "b"},
				{act: actionDel, key: "first"},
				{act: actionDel, key: "second"},
			},
			expect: []string(nil),
		},
		{
			name: "add two, dup values",
			do: []action{
				{act: actionAdd, key: "first", val: "a"},
				{act: actionAdd, key: "second", val: "b"},
				{act: actionAdd, key: "third", val: "a"},
			},
			expect: []string{"a"},
		},
		{
			name: "add three, dup values, delete two, add one",
			do: []action{
				{act: actionAdd, key: "first", val: "a"},
				{act: actionAdd, key: "second", val: "b"},
				{act: actionAdd, key: "third", val: "a"},
				{act: actionDel, key: "first"},
				{act: actionDel, key: "second"},
				{act: actionAdd, key: "forth", val: "c"},
			},
			expect: []string{"c"},
		},
	}

	for _, test := range tests {
		t.Run(test.name, func(t *testing.T) {
			var changed bool
			c := newContainer()
			c.AddListener(func() {
				changed = true
			})
			assert.Nil(t, c.GetValues())
			assert.False(t, changed)

			for _, order := range test.do {
				if order.act == actionAdd {
					c.OnAdd(discov.KV{
						Key: order.key,
						Val: order.val,
					})
				} else {
					c.OnDelete(discov.KV{
						Key: order.key,
						Val: order.val,
					})
				}
			}

			assert.True(t, changed)
			assert.ElementsMatch(t, test.expect, c.GetValues())
		})
	}
}
@@ -386,8 +386,9 @@ func (c *cluster) watch(cli EtcdClient, key watchKey, rev int64) {
 			rev = c.load(cli, key)
 		}

-		// log the error and retry
+		// log the error and retry with cooldown to prevent CPU/disk exhaustion
 		logc.Error(cli.Ctx(), err)
+		time.Sleep(coolDownUnstable.AroundDuration(coolDownInterval))
 	}
 }
@@ -432,16 +433,16 @@ func (c *cluster) setupWatch(cli EtcdClient, key watchKey, rev int64) (context.C
 	}

 	ctx, cancel := context.WithCancel(cli.Ctx())

+	c.lock.Lock()
 	if watcher, ok := c.watchers[key]; ok {
 		watcher.cancel = cancel
 	} else {
 		val := newWatchValue()
 		val.cancel = cancel
-		c.lock.Lock()
 		c.watchers[key] = val
-		c.lock.Unlock()
 	}
+	c.lock.Unlock()

 	rch = cli.Watch(clientv3.WithRequireLeader(ctx), wkey, ops...)
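The fix above widens the critical section so the map lookup and the insert happen under one lock; previously the `c.watchers[key]` read ran unlocked and could race with writers. A self-contained sketch of that read-or-insert pattern with `sync.RWMutex` (the `registry` type and `demo` helper are hypothetical; run with `go test -race` / `go run -race` to verify):

```go
package main

import (
	"fmt"
	"sync"
)

// registry guards its map with a sync.RWMutex, mirroring the shape of the
// cluster.watchers fix: the read-or-insert must happen under one lock.
type registry struct {
	lock     sync.RWMutex
	watchers map[string]int
}

func (r *registry) upsert(key string) {
	r.lock.Lock()
	defer r.lock.Unlock()
	r.watchers[key]++
}

func (r *registry) count() int {
	r.lock.RLock()
	defer r.lock.RUnlock()
	return len(r.watchers)
}

// demo hammers the map from 20 goroutines over 3 keys and returns the
// number of distinct keys seen once all goroutines finish.
func demo() int {
	r := &registry{watchers: make(map[string]int)}
	var wg sync.WaitGroup
	for i := 0; i < 20; i++ {
		i := i
		wg.Add(1)
		go func() {
			defer wg.Done()
			r.upsert(fmt.Sprintf("key-%d", i%3))
		}()
	}
	wg.Wait()
	return r.count()
}

func main() {
	fmt.Println(demo())
}
```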
@@ -477,6 +477,72 @@ func TestRegistry_Unmonitor(t *testing.T) {
	assert.Nil(t, watchVals)
}

// TestCluster_ConcurrentMonitor tests the race condition fix in setupWatch.
// This test specifically covers the scenario from issue #5394 where:
// - addListener() writes to the watchers map (with lock)
// - setupWatch() reads from the watchers map (now with lock after fix)
// Running with -race flag will detect any race conditions.
func TestCluster_ConcurrentMonitor(t *testing.T) {
	ctrl := gomock.NewController(t)
	defer ctrl.Finish()

	cli := NewMockEtcdClient(ctrl)
	cli.EXPECT().Ctx().Return(context.Background()).AnyTimes()
	cli.EXPECT().Watch(gomock.Any(), gomock.Any(), gomock.Any()).Return(make(chan clientv3.WatchResponse)).AnyTimes()

	c := &cluster{
		endpoints:  []string{"localhost:2379"},
		key:        "test-cluster",
		watchers:   make(map[watchKey]*watchValue),
		watchGroup: threading.NewRoutineGroup(),
		done:       make(chan lang.PlaceholderType),
		lock:       sync.RWMutex{},
	}

	// Spawn multiple concurrent operations that simulate the race condition:
	// - Some goroutines call addListener (write to map)
	// - Some goroutines call setupWatch (read from map)
	var wg sync.WaitGroup
	numGoroutines := 20
	wg.Add(numGoroutines)

	keys := []watchKey{
		{key: "key-0", exactMatch: false},
		{key: "key-1", exactMatch: false},
		{key: "key-2", exactMatch: false},
	}

	for i := 0; i < numGoroutines; i++ {
		idx := i
		go func() {
			defer wg.Done()
			key := keys[idx%len(keys)]

			if idx%2 == 0 {
				// Half the goroutines add listeners (write operation)
				c.addListener(key, &mockListener{})
			} else {
				// Half the goroutines setup watches (read operation)
				_, _ = c.setupWatch(cli, key, 0)
			}
		}()
	}

	// Wait for all goroutines to complete
	wg.Wait()

	// Verify that watchers were correctly added
	c.lock.RLock()
	assert.True(t, len(c.watchers) > 0, "watchers should be added")
	for _, watcher := range c.watchers {
		assert.NotNil(t, watcher, "watcher should not be nil")
	}
	c.lock.RUnlock()

	// Clean up
	close(c.done)
}

type mockListener struct {
}
@@ -19,8 +19,9 @@ type (
 		exclusive  bool
 		key        string
 		exactMatch bool
-		items      *container
+		items      Container
 	}

+	KV = internal.KV
 )

 // NewSubscriber returns a Subscriber.
@@ -35,7 +36,9 @@ func NewSubscriber(endpoints []string, key string, opts ...SubOption) (*Subscrib
 	for _, opt := range opts {
 		opt(sub)
 	}
-	sub.items = newContainer(sub.exclusive)
+	if sub.items == nil {
+		sub.items = newContainer(sub.exclusive)
+	}

 	if err := internal.GetRegistry().Monitor(endpoints, key, sub.exactMatch, sub.items); err != nil {
 		return nil, err
@@ -46,7 +49,7 @@ func NewSubscriber(endpoints []string, key string, opts ...SubOption) (*Subscrib

 // AddListener adds listener to s.
 func (s *Subscriber) AddListener(listener func()) {
-	s.items.addListener(listener)
+	s.items.AddListener(listener)
 }

 // Close closes the subscriber.
@@ -56,7 +59,7 @@ func (s *Subscriber) Close() {

 // Values returns all the subscription values.
 func (s *Subscriber) Values() []string {
-	return s.items.getValues()
+	return s.items.GetValues()
 }

 // Exclusive means that key value can only be 1:1,
@@ -88,16 +91,32 @@ func WithSubEtcdTLS(certFile, certKeyFile, caFile string, insecureSkipVerify boo
 	}
 }

-type container struct {
-	exclusive bool
-	values    map[string][]string
-	mapping   map[string]string
-	snapshot  atomic.Value
-	dirty     *syncx.AtomicBool
-	listeners []func()
-	lock      sync.Mutex
-}
+// WithContainer provides a custom container to the subscriber.
+func WithContainer(container Container) SubOption {
+	return func(sub *Subscriber) {
+		sub.items = container
+	}
+}
+
+type (
+	Container interface {
+		OnAdd(kv internal.KV)
+		OnDelete(kv internal.KV)
+		AddListener(listener func())
+		GetValues() []string
+	}
+
+	container struct {
+		exclusive bool
+		values    map[string][]string
+		mapping   map[string]string
+		snapshot  atomic.Value
+		dirty     *syncx.AtomicBool
+		listeners []func()
+		lock      sync.Mutex
+	}
+)

 func newContainer(exclusive bool) *container {
 	return &container{
 		exclusive: exclusive,
@@ -141,7 +160,7 @@ func (c *container) addKv(key, value string) ([]string, bool) {
 	return nil, false
 }

-func (c *container) addListener(listener func()) {
+func (c *container) AddListener(listener func()) {
 	c.lock.Lock()
 	c.listeners = append(c.listeners, listener)
 	c.lock.Unlock()
@@ -170,7 +189,7 @@ func (c *container) doRemoveKey(key string) {
 	}
 }

-func (c *container) getValues() []string {
+func (c *container) GetValues() []string {
 	if !c.dirty.True() {
 		return c.snapshot.Load().([]string)
 	}
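With the diff above, any type implementing the four-method `Container` interface can be plugged in through `WithContainer`. A minimal sketch of a custom implementation, with `KV` simplified to a local struct (the `lastValue` type is hypothetical; it keeps only the most recently added value, similar in spirit to an exclusive container):

```go
package main

import "fmt"

// KV mirrors the discov key/value pair for this sketch.
type KV struct {
	Key, Val string
}

// lastValue is a toy Container: it retains only the latest value.
type lastValue struct {
	vals      []string
	listeners []func()
}

func (c *lastValue) OnAdd(kv KV)                 { c.vals = []string{kv.Val}; c.notify() }
func (c *lastValue) OnDelete(KV)                 { c.vals = nil; c.notify() }
func (c *lastValue) AddListener(listener func()) { c.listeners = append(c.listeners, listener) }
func (c *lastValue) GetValues() []string         { return c.vals }

func (c *lastValue) notify() {
	for _, fn := range c.listeners {
		fn()
	}
}

func main() {
	c := &lastValue{}
	c.OnAdd(KV{Key: "first", Val: "a"})
	c.OnAdd(KV{Key: "second", Val: "b"})
	fmt.Println(c.GetValues())
}
```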
@@ -171,10 +171,10 @@ func TestContainer(t *testing.T) {
 		t.Run(test.name, func(t *testing.T) {
 			var changed bool
 			c := newContainer(exclusive)
-			c.addListener(func() {
+			c.AddListener(func() {
 				changed = true
 			})
-			assert.Nil(t, c.getValues())
+			assert.Nil(t, c.GetValues())
 			assert.False(t, changed)

 			for _, order := range test.do {
@@ -193,9 +193,9 @@ func TestContainer(t *testing.T) {

 			assert.True(t, changed)
 			assert.True(t, c.dirty.True())
-			assert.ElementsMatch(t, test.expect, c.getValues())
+			assert.ElementsMatch(t, test.expect, c.GetValues())
 			assert.False(t, c.dirty.True())
-			assert.ElementsMatch(t, test.expect, c.getValues())
+			assert.ElementsMatch(t, test.expect, c.GetValues())
 		})
 	}
 }
@@ -204,12 +204,14 @@ func TestContainer(t *testing.T) {
 func TestSubscriber(t *testing.T) {
 	sub := new(Subscriber)
 	Exclusive()(sub)
-	sub.items = newContainer(sub.exclusive)
+	c := newContainer(sub.exclusive)
+	WithContainer(c)(sub)
+	sub.items = c
 	var count int32
 	sub.AddListener(func() {
 		atomic.AddInt32(&count, 1)
 	})
-	sub.items.notifyChange()
+	c.notifyChange()
 	assert.Empty(t, sub.Values())
 	assert.Equal(t, int32(1), atomic.LoadInt32(&count))
 }
@@ -229,12 +231,13 @@ func TestWithSubEtcdAccount(t *testing.T) {
 func TestWithExactMatch(t *testing.T) {
 	sub := new(Subscriber)
 	WithExactMatch()(sub)
-	sub.items = newContainer(sub.exclusive)
+	c := newContainer(sub.exclusive)
+	sub.items = c
 	var count int32
 	sub.AddListener(func() {
 		atomic.AddInt32(&count, 1)
 	})
-	sub.items.notifyChange()
+	c.notifyChange()
 	assert.Empty(t, sub.Values())
 	assert.Equal(t, int32(1), atomic.LoadInt32(&count))
 }
@@ -168,7 +168,7 @@ func (s Stream) Count() (count int) {
 	return
 }

-// Distinct removes the duplicated items base on the given KeyFunc.
+// Distinct removes the duplicated items based on the given KeyFunc.
 func (s Stream) Distinct(fn KeyFunc) Stream {
 	source := make(chan any)

@@ -459,7 +459,7 @@ func (s Stream) Tail(n int64) Stream {
 	return Range(source)
 }

-// Walk lets the callers handle each item, the caller may write zero, one or more items base on the given item.
+// Walk lets the callers handle each item, the caller may write zero, one or more items based on the given item.
 func (s Stream) Walk(fn WalkFunc, opts ...Option) Stream {
 	option := buildOptions(opts...)
 	if option.unlimitedWorkers {
@@ -1,8 +1,6 @@
|
|||||||
package fx
|
package fx
|
||||||
|
|
||||||
import (
|
import (
|
||||||
"io"
|
|
||||||
"log"
|
|
||||||
"math/rand"
|
"math/rand"
|
||||||
"reflect"
|
"reflect"
|
||||||
"runtime"
|
"runtime"
|
||||||
@@ -13,6 +11,7 @@ import (
|
|||||||
"time"
|
"time"
|
||||||
|
|
||||||
"github.com/stretchr/testify/assert"
|
"github.com/stretchr/testify/assert"
|
||||||
|
"github.com/zeromicro/go-zero/core/logx/logtest"
|
||||||
"github.com/zeromicro/go-zero/core/stringx"
|
"github.com/zeromicro/go-zero/core/stringx"
|
||||||
"go.uber.org/goleak"
|
"go.uber.org/goleak"
|
||||||
)
|
)
|
||||||
@@ -238,7 +237,7 @@ func TestLast(t *testing.T) {
|
|||||||
|
|
||||||
func TestMap(t *testing.T) {
|
func TestMap(t *testing.T) {
|
||||||
runCheckedTest(t, func(t *testing.T) {
|
runCheckedTest(t, func(t *testing.T) {
|
||||||
log.SetOutput(io.Discard)
|
logtest.Discard(t)
|
||||||
|
|
||||||
tests := []struct {
|
tests := []struct {
|
||||||
mapper MapFunc
|
mapper MapFunc
|
||||||
@@ -96,7 +96,7 @@ func (h *ConsistentHash) AddWithWeight(node any, weight int) {
 	h.AddWithReplicas(node, replicas)
 }
 
-// Get returns the corresponding node from h base on the given v.
+// Get returns the corresponding node from h based on the given v.
 func (h *ConsistentHash) Get(v any) (any, bool) {
 	h.lock.RLock()
 	defer h.lock.RUnlock()
@@ -25,6 +25,29 @@ func TestMd5Hex(t *testing.T) {
 	assert.Equal(t, md5Digest, actual)
 }
 
+func TestHash(t *testing.T) {
+	result := Hash([]byte(text))
+	assert.NotEqual(t, uint64(0), result)
+}
+
+func TestHash_Deterministic(t *testing.T) {
+	data := []byte("consistent-hash-test")
+	first := Hash(data)
+	second := Hash(data)
+	assert.Equal(t, first, second)
+}
+
+func TestHash_Empty(t *testing.T) {
+	// Hash should not panic on empty input.
+	result := Hash([]byte{})
+	_ = result
+}
+
+func TestMd5Hex_Empty(t *testing.T) {
+	result := Md5Hex([]byte{})
+	assert.Equal(t, 32, len(result))
+}
+
 func BenchmarkHashFnv(b *testing.B) {
 	for i := 0; i < b.N; i++ {
 		h := fnv.New32()
@@ -1,47 +1,70 @@
 package logx
 
-// A LogConf is a logging config.
-type LogConf struct {
-	// ServiceName represents the service name.
-	ServiceName string `json:",optional"`
-	// Mode represents the logging mode, default is `console`.
-	// console: log to console.
-	// file: log to file.
-	// volume: used in k8s, prepend the hostname to the log file name.
-	Mode string `json:",default=console,options=[console,file,volume]"`
-	// Encoding represents the encoding type, default is `json`.
-	// json: json encoding.
-	// plain: plain text encoding, typically used in development.
-	Encoding string `json:",default=json,options=[json,plain]"`
-	// TimeFormat represents the time format, default is `2006-01-02T15:04:05.000Z07:00`.
-	TimeFormat string `json:",optional"`
-	// Path represents the log file path, default is `logs`.
-	Path string `json:",default=logs"`
-	// Level represents the log level, default is `info`.
-	Level string `json:",default=info,options=[debug,info,error,severe]"`
-	// MaxContentLength represents the max content bytes, default is no limit.
-	MaxContentLength uint32 `json:",optional"`
-	// Compress represents whether to compress the log file, default is `false`.
-	Compress bool `json:",optional"`
-	// Stat represents whether to log statistics, default is `true`.
-	Stat bool `json:",default=true"`
-	// KeepDays represents how many days the log files will be kept. Default to keep all files.
-	// Only take effect when Mode is `file` or `volume`, both work when Rotation is `daily` or `size`.
-	KeepDays int `json:",optional"`
-	// StackCooldownMillis represents the cooldown time for stack logging, default is 100ms.
-	StackCooldownMillis int `json:",default=100"`
-	// MaxBackups represents how many backup log files will be kept. 0 means all files will be kept forever.
-	// Only take effect when RotationRuleType is `size`.
-	// Even though `MaxBackups` sets 0, log files will still be removed
-	// if the `KeepDays` limitation is reached.
-	MaxBackups int `json:",default=0"`
-	// MaxSize represents how much space the writing log file takes up. 0 means no limit. The unit is `MB`.
-	// Only take effect when RotationRuleType is `size`
-	MaxSize int `json:",default=0"`
-	// Rotation represents the type of log rotation rule. Default is `daily`.
-	// daily: daily rotation.
-	// size: size limited rotation.
-	Rotation string `json:",default=daily,options=[daily,size]"`
-	// FileTimeFormat represents the time format for file name, default is `2006-01-02T15:04:05.000Z07:00`.
-	FileTimeFormat string `json:",optional"`
-}
+type (
+	// A LogConf is a logging config.
+	LogConf struct {
+		// ServiceName represents the service name.
+		ServiceName string `json:",optional"`
+		// Mode represents the logging mode, default is `console`.
+		// console: log to console.
+		// file: log to file.
+		// volume: used in k8s, prepend the hostname to the log file name.
+		Mode string `json:",default=console,options=[console,file,volume]"`
+		// Encoding represents the encoding type, default is `json`.
+		// json: json encoding.
+		// plain: plain text encoding, typically used in development.
+		Encoding string `json:",default=json,options=[json,plain]"`
+		// TimeFormat represents the time format, default is `2006-01-02T15:04:05.000Z07:00`.
+		TimeFormat string `json:",optional"`
+		// Path represents the log file path, default is `logs`.
+		Path string `json:",default=logs"`
+		// Level represents the log level, default is `info`.
+		Level string `json:",default=info,options=[debug,info,error,severe]"`
+		// MaxContentLength represents the max content bytes, default is no limit.
+		MaxContentLength uint32 `json:",optional"`
+		// Compress represents whether to compress the log file, default is `false`.
+		Compress bool `json:",optional"`
+		// Stat represents whether to log statistics, default is `true`.
+		Stat bool `json:",default=true"`
+		// KeepDays represents how many days the log files will be kept. Default to keep all files.
+		// Only take effect when Mode is `file` or `volume`, both work when Rotation is `daily` or `size`.
+		KeepDays int `json:",optional"`
+		// StackCooldownMillis represents the cooldown time for stack logging, default is 100ms.
+		StackCooldownMillis int `json:",default=100"`
+		// MaxBackups represents how many backup log files will be kept. 0 means all files will be kept forever.
+		// Only take effect when RotationRuleType is `size`.
+		// Even though `MaxBackups` sets 0, log files will still be removed
+		// if the `KeepDays` limitation is reached.
+		MaxBackups int `json:",default=0"`
+		// MaxSize represents how much space the writing log file takes up. 0 means no limit. The unit is `MB`.
+		// Only take effect when RotationRuleType is `size`
+		MaxSize int `json:",default=0"`
+		// Rotation represents the type of log rotation rule. Default is `daily`.
+		// daily: daily rotation.
+		// size: size limited rotation.
+		Rotation string `json:",default=daily,options=[daily,size]"`
+		// FileTimeFormat represents the time format for file name, default is `2006-01-02T15:04:05.000Z07:00`.
+		FileTimeFormat string `json:",optional"`
+		// FieldKeys represents the field keys.
+		FieldKeys fieldKeyConf `json:",optional"`
+	}
+
+	fieldKeyConf struct {
+		// CallerKey represents the caller key.
+		CallerKey string `json:",default=caller"`
+		// ContentKey represents the content key.
+		ContentKey string `json:",default=content"`
+		// DurationKey represents the duration key.
+		DurationKey string `json:",default=duration"`
+		// LevelKey represents the level key.
+		LevelKey string `json:",default=level"`
+		// SpanKey represents the span key.
+		SpanKey string `json:",default=span"`
+		// TimestampKey represents the timestamp key.
+		TimestampKey string `json:",default=@timestamp"`
+		// TraceKey represents the trace key.
+		TraceKey string `json:",default=trace"`
+		// TruncatedKey represents the truncated key.
+		TruncatedKey string `json:",default=truncated"`
+	}
+)
@@ -276,7 +276,8 @@ func SetUp(c LogConf) (err error) {
 	// Because multiple services in one process might call SetUp respectively.
 	// Need to wait for the first caller to complete the execution.
 	setupOnce.Do(func() {
-		setupLogLevel(c)
+		setupLogLevel(c.Level)
+		setupFieldKeys(c.FieldKeys)
 
 		if !c.Stat {
 			DisableStat()
@@ -480,8 +481,35 @@ func handleOptions(opts []LogOption) {
 	}
 }
 
-func setupLogLevel(c LogConf) {
-	switch c.Level {
+func setupFieldKeys(c fieldKeyConf) {
+	if len(c.CallerKey) > 0 {
+		callerKey = c.CallerKey
+	}
+	if len(c.ContentKey) > 0 {
+		contentKey = c.ContentKey
+	}
+	if len(c.DurationKey) > 0 {
+		durationKey = c.DurationKey
+	}
+	if len(c.LevelKey) > 0 {
+		levelKey = c.LevelKey
+	}
+	if len(c.SpanKey) > 0 {
+		spanKey = c.SpanKey
+	}
+	if len(c.TimestampKey) > 0 {
+		timestampKey = c.TimestampKey
+	}
+	if len(c.TraceKey) > 0 {
+		traceKey = c.TraceKey
+	}
+	if len(c.TruncatedKey) > 0 {
+		truncatedKey = c.TruncatedKey
+	}
+}
+
+func setupLogLevel(level string) {
+	switch level {
 	case levelDebug:
 		SetLevel(DebugLevel)
 	case levelInfo:
@@ -17,6 +17,8 @@ import (
 	"time"
 
 	"github.com/stretchr/testify/assert"
+	"go.opentelemetry.io/otel"
+	"go.opentelemetry.io/otel/sdk/trace"
 )
 
 var (
@@ -245,7 +247,7 @@ func TestStructedLogDebugf(t *testing.T) {
 	defer writer.Store(old)
 
 	doTestStructedLog(t, levelDebug, w, func(v ...any) {
-		Debugf(fmt.Sprint(v...))
+		Debugf("%s", fmt.Sprint(v...))
 	})
 }
 
@@ -557,7 +559,7 @@ func TestStructedLogSlowf(t *testing.T) {
 	defer writer.Store(old)
 
 	doTestStructedLog(t, levelSlow, w, func(v ...any) {
-		Slowf(fmt.Sprint(v...))
+		Slowf("%s", fmt.Sprint(v...))
 	})
 }
 
@@ -623,7 +625,7 @@ func TestStructedLogStatf(t *testing.T) {
 	defer writer.Store(old)
 
 	doTestStructedLog(t, levelStat, w, func(v ...any) {
-		Statf(fmt.Sprint(v...))
+		Statf("%s", fmt.Sprint(v...))
 	})
 }
 
@@ -643,7 +645,7 @@ func TestStructedLogSeveref(t *testing.T) {
 	defer writer.Store(old)
 
 	doTestStructedLog(t, levelSevere, w, func(v ...any) {
-		Severef(fmt.Sprint(v...))
+		Severef("%s", fmt.Sprint(v...))
 	})
 }
 
@@ -777,15 +779,9 @@ func TestSetup(t *testing.T) {
 		MaxBackups: 3,
 		MaxSize: 1024 * 1024,
 	}))
-	setupLogLevel(LogConf{
-		Level: levelInfo,
-	})
-	setupLogLevel(LogConf{
-		Level: levelError,
-	})
-	setupLogLevel(LogConf{
-		Level: levelSevere,
-	})
+	setupLogLevel(levelInfo)
+	setupLogLevel(levelError)
+	setupLogLevel(levelSevere)
 	_, err := createOutput("")
 	assert.NotNil(t, err)
 	Disable()
@@ -1157,3 +1153,66 @@ func (s *countingStringer) String() string {
 	atomic.AddInt32(&s.count, 1)
 	return "countingStringer"
 }
+
+func TestLogKey(t *testing.T) {
+	setupOnce = sync.Once{}
+	MustSetup(LogConf{
+		ServiceName: "any",
+		Mode: "console",
+		Encoding: "json",
+		TimeFormat: timeFormat,
+		FieldKeys: fieldKeyConf{
+			CallerKey: "_caller",
+			ContentKey: "_content",
+			DurationKey: "_duration",
+			LevelKey: "_level",
+			SpanKey: "_span",
+			TimestampKey: "_timestamp",
+			TraceKey: "_trace",
+			TruncatedKey: "_truncated",
+		},
+	})
+
+	t.Cleanup(func() {
+		setupFieldKeys(fieldKeyConf{
+			CallerKey: defaultCallerKey,
+			ContentKey: defaultContentKey,
+			DurationKey: defaultDurationKey,
+			LevelKey: defaultLevelKey,
+			SpanKey: defaultSpanKey,
+			TimestampKey: defaultTimestampKey,
+			TraceKey: defaultTraceKey,
+			TruncatedKey: defaultTruncatedKey,
+		})
+	})
+
+	const message = "hello there"
+	w := new(mockWriter)
+	old := writer.Swap(w)
+	defer writer.Store(old)
+
+	otp := otel.GetTracerProvider()
+	tp := trace.NewTracerProvider(trace.WithSampler(trace.AlwaysSample()))
+	otel.SetTracerProvider(tp)
+	defer otel.SetTracerProvider(otp)
+
+	ctx, span := tp.Tracer("trace-id").Start(context.Background(), "span-id")
+	defer span.End()
+
+	WithContext(ctx).WithDuration(time.Second).Info(message)
+	now := time.Now()
+
+	var m map[string]string
+	if err := json.Unmarshal([]byte(w.String()), &m); err != nil {
+		t.Error(err)
+	}
+	assert.Equal(t, "info", m["_level"])
+	assert.Equal(t, message, m["_content"])
+	assert.Equal(t, "1000.0ms", m["_duration"])
+	assert.Regexp(t, `logx/logs_test.go:\d+`, m["_caller"])
+	assert.NotEmpty(t, m["_trace"])
+	assert.NotEmpty(t, m["_span"])
+	parsedTime, err := time.Parse(timeFormat, m["_timestamp"])
+	assert.True(t, err == nil)
+	assert.Equal(t, now.Minute(), parsedTime.Minute())
+}
@@ -423,3 +423,49 @@ type mockValue struct {
 	Foo string `json:"foo"`
 	Content any `json:"content"`
 }
+
+type testJson struct {
+	Name string `json:"name"`
+	Age int `json:"age"`
+	Score float64 `json:"score"`
+}
+
+func (t testJson) MarshalJSON() ([]byte, error) {
+	type testJsonImpl testJson
+	return json.Marshal(testJsonImpl(t))
+}
+
+func (t testJson) String() string {
+	return fmt.Sprintf("%s %d %f", t.Name, t.Age, t.Score)
+}
+
+func TestLogWithJson(t *testing.T) {
+	w := new(mockWriter)
+	old := writer.Swap(w)
+	writer.lock.RLock()
+	defer func() {
+		writer.lock.RUnlock()
+		writer.Store(old)
+	}()
+
+	l := WithContext(context.Background()).WithFields(Field("bar", testJson{
+		Name: "foo",
+		Age: 1,
+		Score: 1.0,
+	}))
+	l.Info(testlog)
+
+	type mockValue2 struct {
+		mockValue
+		Bar testJson `json:"bar"`
+	}
+
+	var val mockValue2
+	err := json.Unmarshal([]byte(w.String()), &val)
+	assert.NoError(t, err)
+
+	assert.Equal(t, testlog, val.Content)
+	assert.Equal(t, "foo", val.Bar.Name)
+	assert.Equal(t, 1, val.Bar.Age)
+	assert.Equal(t, 1.0, val.Bar.Score)
+}
@@ -66,7 +66,7 @@ type (
 		gzip bool
 	}
 
-	// SizeLimitRotateRule a rotation rule that make the log file rotated base on size
+	// SizeLimitRotateRule a rotation rule that makes the log file rotated based on size
 	SizeLimitRotateRule struct {
 		DailyRotateRule
 		maxSize int64
@@ -53,14 +53,14 @@
 )
 
 const (
-	callerKey = "caller"
-	contentKey = "content"
-	durationKey = "duration"
-	levelKey = "level"
-	spanKey = "span"
-	timestampKey = "@timestamp"
-	traceKey = "trace"
-	truncatedKey = "truncated"
+	defaultCallerKey = "caller"
+	defaultContentKey = "content"
+	defaultDurationKey = "duration"
+	defaultLevelKey = "level"
+	defaultSpanKey = "span"
+	defaultTimestampKey = "@timestamp"
+	defaultTraceKey = "trace"
+	defaultTruncatedKey = "truncated"
 )
 
 var (
@@ -73,3 +73,14 @@ var (
 
 	truncatedField = Field(truncatedKey, true)
 )
+
+var (
+	callerKey = defaultCallerKey
+	contentKey = defaultContentKey
+	durationKey = defaultDurationKey
+	levelKey = defaultLevelKey
+	spanKey = defaultSpanKey
+	timestampKey = defaultTimestampKey
+	traceKey = defaultTraceKey
+	truncatedKey = defaultTruncatedKey
+)
@@ -212,7 +212,6 @@ func newFileWriter(c LogConf) (Writer, error) {
 	statFile := path.Join(c.Path, statFilename)
 
 	handleOptions(opts)
-	setupLogLevel(c)
 
 	if infoLog, err = createOutput(accessFile); err != nil {
 		return nil, err
@@ -423,6 +422,8 @@ func processFieldValue(value any) any {
 			times = append(times, fmt.Sprint(t))
 		}
 		return times
+	case json.Marshaler:
+		return val
 	case fmt.Stringer:
 		return encodeStringer(val)
 	case []fmt.Stringer:
@@ -443,6 +444,8 @@ func wrapLevelWithColor(level string) string {
 		colour = color.FgRed
 	case levelError:
 		colour = color.FgRed
+	case levelSevere:
+		colour = color.FgRed
 	case levelFatal:
 		colour = color.FgRed
 	case levelInfo:
@@ -1,6 +1,7 @@
 package mapping
 
 import (
+	"cmp"
 	"encoding/json"
 	"errors"
 	"fmt"
@@ -12,7 +13,6 @@ import (
 	"sync"
 
 	"github.com/zeromicro/go-zero/core/lang"
-	"github.com/zeromicro/go-zero/core/stringx"
 )
 
 const (
@@ -104,14 +104,13 @@ func convertToString(val any, fullName string) (string, error) {
 func convertTypeFromString(kind reflect.Kind, str string) (any, error) {
 	switch kind {
 	case reflect.Bool:
-		switch strings.ToLower(str) {
-		case "1", "true":
+		if str == "1" || strings.EqualFold(str, "true") {
 			return true, nil
-		case "0", "false":
-			return false, nil
-		default:
-			return false, errTypeMismatch
 		}
+		if str == "0" || strings.EqualFold(str, "false") {
+			return false, nil
+		}
+		return false, errTypeMismatch
 	case reflect.Int:
 		return strconv.ParseInt(str, 10, intSize)
 	case reflect.Int8:
@@ -279,7 +278,7 @@ func parseKeyAndOptions(tagName string, field reflect.StructField) (string, *fie
 	cache, ok := optionsCache[value]
 	cacheLock.RUnlock()
 	if ok {
-		return stringx.TakeOne(cache.key, field.Name), cache.options, cache.err
+		return cmp.Or(cache.key, field.Name), cache.options, cache.err
 	}
 
 	key, options, err := doParseKeyAndOptions(field, value)
@@ -291,7 +290,7 @@ func parseKeyAndOptions(tagName string, field reflect.StructField) (string, *fie
 	}
 	cacheLock.Unlock()
 
-	return stringx.TakeOne(key, field.Name), options, err
+	return cmp.Or(key, field.Name), options, err
 }
 
 // support below notations:
@@ -334,3 +334,43 @@ func TestValidateValueRange(t *testing.T) {
 func TestSetMatchedPrimitiveValue(t *testing.T) {
 	assert.Error(t, setMatchedPrimitiveValue(reflect.Func, reflect.ValueOf(2), "1"))
 }
+
+func TestConvertTypeFromString_Bool(t *testing.T) {
+	tests := []struct {
+		name string
+		input string
+		want bool
+		wantErr bool
+	}{
+		// true cases
+		{name: "1", input: "1", want: true, wantErr: false},
+		{name: "true lowercase", input: "true", want: true, wantErr: false},
+		{name: "True mixed", input: "True", want: true, wantErr: false},
+		{name: "TRUE uppercase", input: "TRUE", want: true, wantErr: false},
+		{name: "TrUe mixed", input: "TrUe", want: true, wantErr: false},
+		// false cases
+		{name: "0", input: "0", want: false, wantErr: false},
+		{name: "false lowercase", input: "false", want: false, wantErr: false},
+		{name: "False mixed", input: "False", want: false, wantErr: false},
+		{name: "FALSE uppercase", input: "FALSE", want: false, wantErr: false},
+		{name: "FaLsE mixed", input: "FaLsE", want: false, wantErr: false},
+		// error cases
+		{name: "invalid yes", input: "yes", want: false, wantErr: true},
+		{name: "invalid no", input: "no", want: false, wantErr: true},
+		{name: "invalid empty", input: "", want: false, wantErr: true},
+		{name: "invalid 2", input: "2", want: false, wantErr: true},
+		{name: "invalid truee", input: "truee", want: false, wantErr: true},
+	}

+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			got, err := convertTypeFromString(reflect.Bool, tt.input)
+			if tt.wantErr {
+				assert.Error(t, err)
+			} else {
+				assert.NoError(t, err)
+				assert.Equal(t, tt.want, got)
+			}
+		})
+	}
+}
@@ -29,3 +29,10 @@ func TestCalcDiffEntropy(t *testing.T) {
 	}
 	assert.True(t, CalcEntropy(m) < .99)
 }
+
+func TestCalcEntropySingleItem(t *testing.T) {
+	m := map[any]int{
+		"only": 42,
+	}
+	assert.Equal(t, float64(1), CalcEntropy(m))
+}
@@ -1,5 +1,6 @@
 package mathx
 
+// Numerical is a constraint that permits any numeric type.
 type Numerical interface {
 	~int | ~int8 | ~int16 | ~int32 | ~int64 |
 		~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64 |
@@ -6,7 +6,7 @@ import (
 	"time"
 )
 
-// An Unstable is used to generate random value around the mean value base on given deviation.
+// An Unstable is used to generate random value around the mean value based on given deviation.
 type Unstable struct {
 	deviation float64
 	r *rand.Rand
@@ -3,6 +3,9 @@ package mr
 import (
 	"context"
 	"errors"
+	"fmt"
+	"runtime/debug"
+	"strings"
 	"sync"
 	"sync/atomic"
 
@@ -183,12 +186,16 @@ func buildOptions(opts ...Option) *mapReduceOptions {
 	return options
 }
 
+func buildPanicInfo(r any, stack []byte) string {
+	return fmt.Sprintf("%+v\n\n%s", r, strings.TrimSpace(string(stack)))
+}
+
 func buildSource[T any](generate GenerateFunc[T], panicChan *onceChan) chan T {
 	source := make(chan T)
 	go func() {
 		defer func() {
 			if r := recover(); r != nil {
-				panicChan.write(r)
+				panicChan.write(buildPanicInfo(r, debug.Stack()))
 			}
 			close(source)
 		}()
@@ -235,7 +242,7 @@ func executeMappers[T, U any](mCtx mapperContext[T, U]) {
 		defer func() {
 			if r := recover(); r != nil {
 				atomic.AddInt32(&failed, 1)
-				mCtx.panicChan.write(r)
+				mCtx.panicChan.write(buildPanicInfo(r, debug.Stack()))
 			}
 			wg.Done()
 			<-pool
@@ -289,7 +296,7 @@ func mapReduceWithPanicChan[T, U, V any](source <-chan T, panicChan *onceChan, m
 	defer func() {
 		drain(collector)
 		if r := recover(); r != nil {
-			panicChan.write(buildPanicInfo(r, debug.Stack()))
+			panicChan.write(buildPanicInfo(r, debug.Stack()))
 		}
 		finish()
 	}()
@@ -3,8 +3,7 @@ package mr
 import (
 	"context"
 	"errors"
-	"io"
-	"log"
+	"fmt"
 	"runtime"
 	"sync/atomic"
 	"testing"
@@ -16,9 +15,6 @@ import (
 
 var errDummy = errors.New("dummy")
 
-func init() {
-	log.SetOutput(io.Discard)
-}
-
 func TestFinish(t *testing.T) {
 	defer goleak.VerifyNone(t)
@@ -148,11 +144,28 @@ func TestForEach(t *testing.T) {
 
 		assert.Equal(t, tasks/2, int(count))
 	})
-	t.Run("all", func(t *testing.T) {
+}
+
+func TestPanics(t *testing.T) {
 	defer goleak.VerifyNone(t)
 
+	const tasks = 1000
+	verify := func(t *testing.T, r any) {
+		panicStr := fmt.Sprintf("%v", r)
+		assert.Contains(t, panicStr, "foo")
+		assert.Contains(t, panicStr, "goroutine")
+		assert.Contains(t, panicStr, "runtime/debug.Stack")
+		panic(r)
+	}
+
+	t.Run("ForEach run panics", func(t *testing.T) {
+		assert.Panics(t, func() {
+			defer func() {
+				if r := recover(); r != nil {
+					verify(t, r)
+				}
+			}()
 
-		assert.PanicsWithValue(t, "foo", func() {
 			ForEach(func(source chan<- int) {
 				for i := 0; i < tasks; i++ {
 					source <- i
@@ -162,28 +175,31 @@ func TestForEach(t *testing.T) {
 				})
 			})
 		})
-	})
 
-func TestGeneratePanic(t *testing.T) {
-	defer goleak.VerifyNone(t)
+	t.Run("ForEach generate panics", func(t *testing.T) {
+		assert.Panics(t, func() {
+			defer func() {
+				if r := recover(); r != nil {
+					verify(t, r)
+				}
+			}()
 
-	t.Run("all", func(t *testing.T) {
-		assert.PanicsWithValue(t, "foo", func() {
 			ForEach(func(source chan<- int) {
 				panic("foo")
 			}, func(item int) {
 			})
 		})
 	})
-}
 
-func TestMapperPanic(t *testing.T) {
-	defer goleak.VerifyNone(t)
-
-	const tasks = 1000
 	var run int32
-	t.Run("all", func(t *testing.T) {
-		assert.PanicsWithValue(t, "foo", func() {
+	t.Run("Mapper panics", func(t *testing.T) {
+		assert.Panics(t, func() {
+			defer func() {
+				if r := recover(); r != nil {
+					verify(t, r)
+				}
+			}()
+
 			_, _ = MapReduce(func(source chan<- int) {
 				for i := 0; i < tasks; i++ {
 					source <- i
@@ -5,6 +5,8 @@ import (
 	"io"
 	"os"
 	"runtime"
+	"runtime/debug"
+	"runtime/metrics"
 	"time"
 )
 
@@ -28,10 +30,29 @@ func displayStatsWithWriter(writer io.Writer, interval ...time.Duration) {
 		ticker := time.NewTicker(duration)
 		defer ticker.Stop()
 		for range ticker.C {
-			var m runtime.MemStats
-			runtime.ReadMemStats(&m)
+			var (
+				alloc, totalAlloc, sys uint64
+				samples = []metrics.Sample{
+					{Name: "/memory/classes/heap/objects:bytes"},
+					{Name: "/gc/heap/allocs:bytes"},
+					{Name: "/memory/classes/total:bytes"},
+				}
+			)
+			metrics.Read(samples)
+
+			if samples[0].Value.Kind() == metrics.KindUint64 {
+				alloc = samples[0].Value.Uint64()
+			}
+			if samples[1].Value.Kind() == metrics.KindUint64 {
+				totalAlloc = samples[1].Value.Uint64()
+			}
+			if samples[2].Value.Kind() == metrics.KindUint64 {
+				sys = samples[2].Value.Uint64()
+			}
+			var stats debug.GCStats
+			debug.ReadGCStats(&stats)
 			fmt.Fprintf(writer, "Goroutines: %d, Alloc: %vm, TotalAlloc: %vm, Sys: %vm, NumGC: %v\n",
-				runtime.NumGoroutine(), m.Alloc/mega, m.TotalAlloc/mega, m.Sys/mega, m.NumGC)
+				runtime.NumGoroutine(), alloc/mega, totalAlloc/mega, sys/mega, stats.NumGC)
 		}
 	}()
 }
|
|||||||
@@ -1,7 +1,8 @@
 package stat
 
 import (
-	"runtime"
+	"runtime/debug"
+	"runtime/metrics"
 	"sync/atomic"
 	"time"
 
@@ -56,8 +57,28 @@ func bToMb(b uint64) float32 {
 }
 
 func printUsage() {
-	var m runtime.MemStats
-	runtime.ReadMemStats(&m)
+	var (
+		alloc, totalAlloc, sys uint64
+		samples                = []metrics.Sample{
+			{Name: "/memory/classes/heap/objects:bytes"},
+			{Name: "/gc/heap/allocs:bytes"},
+			{Name: "/memory/classes/total:bytes"},
+		}
+		stats debug.GCStats
+	)
+	metrics.Read(samples)
+
+	if samples[0].Value.Kind() == metrics.KindUint64 {
+		alloc = samples[0].Value.Uint64()
+	}
+	if samples[1].Value.Kind() == metrics.KindUint64 {
+		totalAlloc = samples[1].Value.Uint64()
+	}
+	if samples[2].Value.Kind() == metrics.KindUint64 {
+		sys = samples[2].Value.Uint64()
+	}
+
+	debug.ReadGCStats(&stats)
+
 	logx.Statf("CPU: %dm, MEMORY: Alloc=%.1fMi, TotalAlloc=%.1fMi, Sys=%.1fMi, NumGC=%d",
-		CpuUsage(), bToMb(m.Alloc), bToMb(m.TotalAlloc), bToMb(m.Sys), m.NumGC)
+		CpuUsage(), bToMb(alloc), bToMb(totalAlloc), bToMb(sys), stats.NumGC)
 }
@@ -532,7 +532,7 @@ func createModel(t *testing.T, coll mon.Collection) *Model {
 	}
 }
 
 // mustNewTestModel returns a test Model with the given cache.
 func mustNewTestModel(collection mon.Collection, c cache.CacheConf, opts ...cache.Option) *Model {
 	return &Model{
 		Model: &mon.Model{
@@ -65,6 +65,7 @@ type (
 	// RedisNode interface represents a redis node.
 	RedisNode interface {
 		red.Cmdable
+		Do(ctx context.Context, args ...any) *red.Cmd
 	}
 
 	// GeoLocation is used with GeoAdd to add geospatial location.
@@ -259,12 +260,34 @@ func (s *Redis) BitPosCtx(ctx context.Context, key string, bit, start, end int64
 }
 
 // Blpop uses passed in redis connection to execute blocking queries.
+//
+// For blocking operations, you must create a dedicated RedisNode using CreateBlockingNode to avoid
+// exhausting the connection pool. Blocking commands hold connections for extended periods and should
+// not share the regular connection pool.
+//
+// Example usage:
+//
+//	node, err := redis.CreateBlockingNode(rds)
+//	if err != nil {
+//		// handle error
+//	}
+//	defer node.Close()
+//
+//	value, err := rds.Blpop(node, "mylist")
+//	if err != nil {
+//		// handle error
+//	}
+//
 // Doesn't benefit from pooling redis connections of blocking queries
 func (s *Redis) Blpop(node RedisNode, key string) (string, error) {
 	return s.BlpopCtx(context.Background(), node, key)
 }
 
 // BlpopCtx uses passed in redis connection to execute blocking queries.
+//
+// For blocking operations, you must create a dedicated RedisNode using CreateBlockingNode.
+// See Blpop for usage examples.
+//
 // Doesn't benefit from pooling redis connections of blocking queries
 func (s *Redis) BlpopCtx(ctx context.Context, node RedisNode, key string) (string, error) {
 	return s.BlpopWithTimeoutCtx(ctx, node, blockingQueryTimeout, key)
@@ -272,12 +295,18 @@ func (s *Redis) BlpopCtx(ctx context.Context, node RedisNode, key string) (strin
 
 // BlpopEx uses passed in redis connection to execute blpop command.
 // The difference against Blpop is that this method returns a bool to indicate success.
+//
+// For blocking operations, you must create a dedicated RedisNode using CreateBlockingNode.
+// See Blpop for usage examples.
 func (s *Redis) BlpopEx(node RedisNode, key string) (string, bool, error) {
 	return s.BlpopExCtx(context.Background(), node, key)
 }
 
 // BlpopExCtx uses passed in redis connection to execute blpop command.
 // The difference against Blpop is that this method returns a bool to indicate success.
+//
+// For blocking operations, you must create a dedicated RedisNode using CreateBlockingNode.
+// See Blpop for usage examples.
 func (s *Redis) BlpopExCtx(ctx context.Context, node RedisNode, key string) (string, bool, error) {
 	if node == nil {
 		return "", false, ErrNilNode
@@ -297,12 +326,18 @@ func (s *Redis) BlpopExCtx(ctx context.Context, node RedisNode, key string) (str
 
 // BlpopWithTimeout uses passed in redis connection to execute blpop command.
 // Control blocking query timeout
+//
+// For blocking operations, you must create a dedicated RedisNode using CreateBlockingNode.
+// See Blpop for usage examples.
 func (s *Redis) BlpopWithTimeout(node RedisNode, timeout time.Duration, key string) (string, error) {
 	return s.BlpopWithTimeoutCtx(context.Background(), node, timeout, key)
 }
 
 // BlpopWithTimeoutCtx uses passed in redis connection to execute blpop command.
 // Control blocking query timeout
+//
+// For blocking operations, you must create a dedicated RedisNode using CreateBlockingNode.
+// See Blpop for usage examples.
 func (s *Redis) BlpopWithTimeoutCtx(ctx context.Context, node RedisNode, timeout time.Duration,
 	key string) (string, error) {
 	if node == nil {
@@ -371,6 +406,25 @@ func (s *Redis) DelCtx(ctx context.Context, keys ...string) (int, error) {
 	return int(v), nil
 }
 
+// Do executes a generic redis command with given arguments.
+func (s *Redis) Do(args ...any) (any, error) {
+	return s.DoCtx(context.Background(), args...)
+}
+
+// DoCtx executes a generic redis command with given arguments using the provided context.
+func (s *Redis) DoCtx(ctx context.Context, args ...any) (any, error) {
+	if len(args) == 0 {
+		return nil, errors.New("missing redis command")
+	}
+
+	conn, err := getRedis(s)
+	if err != nil {
+		return nil, err
+	}
+
+	return conn.Do(ctx, args...).Result()
+}
+
 // Eval is the implementation of redis eval command.
 func (s *Redis) Eval(script string, keys []string, args ...any) (any, error) {
 	return s.EvalCtx(context.Background(), script, keys, args...)
@@ -630,6 +684,28 @@ func (s *Redis) GetDelCtx(ctx context.Context, key string) (string, error) {
 	return val, err
 }
 
+// GetEx is the implementation of redis getex command.
+// Available since: redis version 6.2.0
+func (s *Redis) GetEx(key string, seconds int) (string, error) {
+	return s.GetExCtx(context.Background(), key, seconds)
+}
+
+// GetExCtx is the implementation of redis getex command.
+// Available since: redis version 6.2.0
+func (s *Redis) GetExCtx(ctx context.Context, key string, seconds int) (string, error) {
+	conn, err := getRedis(s)
+	if err != nil {
+		return "", err
+	}
+
+	val, err := conn.GetEx(ctx, key, time.Duration(seconds)*time.Second).Result()
+	if errors.Is(err, red.Nil) {
+		return "", nil
+	}
+
+	return val, err
+}
+
 // GetSet is the implementation of redis getset command.
 func (s *Redis) GetSet(key, value string) (string, error) {
 	return s.GetSetCtx(context.Background(), key, value)
@@ -1840,6 +1916,29 @@ func (s *Redis) XInfoStreamCtx(ctx context.Context, stream string) (*red.XInfoSt
 
 // XReadGroup reads messages from Redis streams as part of a consumer group.
 // It allows for distributed processing of stream messages with automatic message delivery semantics.
+//
+// For blocking operations, you must create a dedicated RedisNode using CreateBlockingNode to avoid
+// exhausting the connection pool. Blocking commands hold connections for extended periods and should
+// not share the regular connection pool.
+//
+// Example usage:
+//
+//	node, err := redis.CreateBlockingNode(rds)
+//	if err != nil {
+//		// handle error
+//	}
+//	defer node.Close()
+//
+//	streams, err := rds.XReadGroup(
+//		node,          // RedisNode created with CreateBlockingNode
+//		"mygroup",     // consumer group name
+//		"consumer1",   // consumer ID
+//		10,            // max number of messages to read
+//		5*time.Second, // block duration
+//		false,         // noAck flag
+//		"mystream",    // stream name
+//	)
+//
 // Doesn't benefit from pooling redis connections of blocking queries.
 func (s *Redis) XReadGroup(node RedisNode, group string, consumerId string, count int64,
 	block time.Duration, noAck bool, streams ...string) ([]red.XStream, error) {
@@ -1847,6 +1946,10 @@ func (s *Redis) XReadGroup(node RedisNode, group string, consumerId string, coun
 }
 
 // XReadGroupCtx is the context-aware version of XReadGroup.
+//
+// For blocking operations, you must create a dedicated RedisNode using CreateBlockingNode to avoid
+// exhausting the connection pool. See XReadGroup for usage examples.
+//
 // Doesn't benefit from pooling redis connections of blocking queries.
 func (s *Redis) XReadGroupCtx(ctx context.Context, node RedisNode, group string, consumerId string,
 	count int64, block time.Duration, noAck bool, streams ...string) ([]red.XStream, error) {
@@ -275,6 +275,36 @@ func TestRedis_Eval(t *testing.T) {
 	})
 }
 
+func TestRedis_Do(t *testing.T) {
+	runOnRedis(t, func(client *Redis) {
+		_, err := newRedis(client.Addr, badType()).Do("PING")
+		assert.NotNil(t, err)
+
+		pong, err := client.Do("PING")
+		assert.Nil(t, err)
+		assert.Equal(t, "PONG", pong)
+
+		ok, err := client.Do("SET", "key1", "value1")
+		assert.Nil(t, err)
+		assert.Equal(t, "OK", ok)
+
+		val, err := client.Do("GET", "key1")
+		assert.Nil(t, err)
+		assert.Equal(t, "value1", val)
+
+		_, err = client.Do("GET", "not_exist")
+		assert.Equal(t, Nil, err)
+
+		_, err = client.Do()
+		assert.NotNil(t, err)
+
+		ctx, cancel := context.WithCancel(context.Background())
+		cancel()
+		_, err = client.DoCtx(ctx, "PING")
+		assert.Equal(t, context.Canceled, err)
+	})
+}
+
 func TestRedis_ScriptRun(t *testing.T) {
 	runOnRedis(t, func(client *Redis) {
 		sc := NewScript(`redis.call("EXISTS", KEYS[1])`)
@@ -1104,6 +1134,45 @@ func TestRedis_GetDel(t *testing.T) {
 	})
 }
 
+func TestRedis_GetEx(t *testing.T) {
+	t.Run("get_ex", func(t *testing.T) {
+		runOnRedis(t, func(client *Redis) {
+			val, err := client.GetEx("getex_key", 10)
+			assert.Equal(t, "", val)
+			assert.Nil(t, err)
+
+			err = client.Set("getex_key", "getex_value")
+			assert.Nil(t, err)
+
+			val, err = client.GetEx("getex_key", 10)
+			assert.Nil(t, err)
+			assert.Equal(t, "getex_value", val)
+			val, err = client.Get("getex_key")
+			assert.Nil(t, err)
+			assert.Equal(t, "getex_value", val)
+
+			ttl, err := client.Ttl("getex_key")
+			assert.Nil(t, err)
+			assert.True(t, ttl > 0 && ttl <= 10)
+
+			val, err = client.GetEx("getex_key", 5)
+			assert.Nil(t, err)
+			assert.Equal(t, "getex_value", val)
+
+			ttl, err = client.Ttl("getex_key")
+			assert.Nil(t, err)
+			assert.True(t, ttl > 0 && ttl <= 5)
+		})
+	})
+
+	t.Run("get_ex_with_error", func(t *testing.T) {
+		runOnRedisWithError(t, func(client *Redis) {
+			_, err := newRedis(client.Addr, badType()).GetEx("hello", 10)
+			assert.Error(t, err)
+		})
+	})
+}
+
 func TestRedis_GetSet(t *testing.T) {
 	t.Run("set_get", func(t *testing.T) {
 		runOnRedis(t, func(client *Redis) {
@@ -13,7 +13,37 @@ type ClosableNode interface {
 	Close()
 }
 
-// CreateBlockingNode returns a ClosableNode.
+// CreateBlockingNode creates a dedicated RedisNode for blocking operations.
+//
+// Blocking Redis commands (like BLPOP, BRPOP, XREADGROUP with block parameter) hold connections
+// for extended periods while waiting for data. Using them with the regular Redis connection pool
+// can exhaust all available connections, causing other operations to fail or timeout.
+//
+// CreateBlockingNode creates a separate Redis client with a minimal connection pool (size 1) that
+// is dedicated to blocking operations. This ensures blocking commands don't interfere with regular
+// Redis operations.
+//
+// Example usage:
+//
+//	rds := redis.MustNewRedis(redis.RedisConf{
+//		Host: "localhost:6379",
+//		Type: redis.NodeType,
+//	})
+//
+//	// Create a dedicated node for blocking operations
+//	node, err := redis.CreateBlockingNode(rds)
+//	if err != nil {
+//		// handle error
+//	}
+//	defer node.Close() // Important: close the node when done
+//
+//	// Use the node for blocking operations
+//	value, err := rds.Blpop(node, "mylist")
+//	if err != nil {
+//		// handle error
+//	}
+//
+// The returned ClosableNode must be closed when no longer needed to release resources.
 func CreateBlockingNode(r *Redis) (ClosableNode, error) {
 	timeout := readWriteTimeout + blockingQueryTimeout
 
@@ -70,25 +70,16 @@ func getTaggedFieldValueMap(v reflect.Value) (map[string]any, error) {
 }
 
 func getValueInterface(value reflect.Value) (any, error) {
-	switch value.Kind() {
-	case reflect.Ptr:
-		if !value.CanInterface() {
-			return nil, ErrNotReadableValue
-		}
-
-		if value.IsNil() {
-			baseValueType := mapping.Deref(value.Type())
-			value.Set(reflect.New(baseValueType))
-		}
-
-		return value.Interface(), nil
-	default:
-		if !value.CanAddr() || !value.Addr().CanInterface() {
-			return nil, ErrNotReadableValue
-		}
-
-		return value.Addr().Interface(), nil
+	if !value.CanAddr() || !value.Addr().CanInterface() {
+		return nil, ErrNotReadableValue
 	}
+
+	if value.Kind() == reflect.Pointer && value.IsNil() {
+		baseValueType := mapping.Deref(value.Type())
+		value.Set(reflect.New(baseValueType))
+	}
+
+	return value.Addr().Interface(), nil
 }
 
 func isScanFailed(err error) bool {
@@ -4,7 +4,9 @@ import (
 	"context"
 	"database/sql"
 	"errors"
+	"reflect"
 	"testing"
+	"time"
 
 	"github.com/DATA-DOG/go-sqlmock"
 	"github.com/stretchr/testify/assert"
@@ -1575,6 +1577,782 @@ func TestAnonymousStructPrError(t *testing.T) {
 	})
 }
 
+func TestUnmarshalRowsZeroValueStructPtr(t *testing.T) {
+	secondNamePtr := "second_ptr"
+	secondAgePtr := int64(30)
+	thirdNamePtr := "third_ptr"
+	thirdAgePtr := int64(0)
+
+	expect := []struct {
+		Name    string
+		NamePtr *string
+		Age     int64
+		AgePtr  *int64
+	}{
+		{
+			Name:    "first",
+			NamePtr: nil,
+			Age:     2,
+			AgePtr:  nil,
+		},
+		{
+			Name:    "second",
+			NamePtr: &secondNamePtr,
+			Age:     3,
+			AgePtr:  &secondAgePtr,
+		},
+		{
+			Name:    "",
+			NamePtr: &thirdNamePtr,
+			Age:     0,
+			AgePtr:  &thirdAgePtr,
+		},
+	}
+
+	var value []struct {
+		Age     int64   `db:"age"`
+		AgePtr  *int64  `db:"age_ptr"`
+		Name    string  `db:"name"`
+		NamePtr *string `db:"name_ptr"`
+	}
+
+	dbtest.RunTest(t, func(db *sql.DB, mock sqlmock.Sqlmock) {
+		rs := sqlmock.NewRows([]string{"name", "name_ptr", "age", "age_ptr"}).
+			AddRow("first", nil, 2, nil).
+			AddRow("second", "second_ptr", 3, 30).
+			AddRow("", "third_ptr", 0, 0)
+
+		mock.ExpectQuery("select (.+) from users where user=?").
+			WithArgs("anyone").WillReturnRows(rs)
+
+		assert.Nil(t, query(context.Background(), db, func(rows *sql.Rows) error {
+			return unmarshalRows(&value, rows, true)
+		}, "select name, name_ptr, age, age_ptr from users where user=?", "anyone"))
+
+		assert.Equal(t, 3, len(value), "should return 3 rows")
+
+		for i, each := range expect {
+			assert.Equal(t, each.Name, value[i].Name)
+			assert.Equal(t, each.Age, value[i].Age)
+
+			if each.NamePtr == nil {
+				assert.Nil(t, value[i].NamePtr)
+			} else {
+				assert.NotNil(t, value[i].NamePtr)
+				assert.Equal(t, *each.NamePtr, *value[i].NamePtr)
+			}
+
+			if each.AgePtr == nil {
+				assert.Nil(t, value[i].AgePtr)
+			} else {
+				assert.NotNil(t, value[i].AgePtr)
+				assert.Equal(t, *each.AgePtr, *value[i].AgePtr)
+			}
+		}
+	})
+}
+
+func TestUnmarshalRowsAllNullStructPtrFields(t *testing.T) {
+	expect := []struct {
+		NamePtr *string
+		AgePtr  *int64
+	}{
+		{
+			NamePtr: nil,
+			AgePtr:  nil,
+		},
+		{
+			NamePtr: stringPtr("second"),
+			AgePtr:  int64Ptr(30),
+		},
+		{
+			NamePtr: nil,
+			AgePtr:  nil,
+		},
+	}
+
+	var value []struct {
+		AgePtr  *int64  `db:"age_ptr"`
+		NamePtr *string `db:"name_ptr"`
+	}
+
+	dbtest.RunTest(t, func(db *sql.DB, mock sqlmock.Sqlmock) {
+		rs := sqlmock.NewRows([]string{"name_ptr", "age_ptr"}).
+			AddRow(nil, nil).
+			AddRow("second", 30).
+			AddRow(nil, nil)
+
+		mock.ExpectQuery("select (.+) from users where user=?").
+			WithArgs("anyone").WillReturnRows(rs)
+
+		assert.Nil(t, query(context.Background(), db, func(rows *sql.Rows) error {
+			return unmarshalRows(&value, rows, true)
+		}, "select name_ptr, age_ptr from users where user=?", "anyone"))
+
+		assert.Equal(t, 3, len(value))
+
+		for i, each := range expect {
+			if each.NamePtr == nil {
+				assert.Nil(t, value[i].NamePtr)
+			} else {
+				assert.NotNil(t, value[i].NamePtr)
+				assert.Equal(t, *each.NamePtr, *value[i].NamePtr)
+			}
+
+			if each.AgePtr == nil {
+				assert.Nil(t, value[i].AgePtr)
+			} else {
+				assert.NotNil(t, value[i].AgePtr)
+				assert.Equal(t, *each.AgePtr, *value[i].AgePtr)
+			}
+		}
+	})
+}
+
+func TestUnmarshalRowsWithSqlNullTypes(t *testing.T) {
+	expect := []struct {
+		Name       string
+		NullName   sql.NullString
+		Age        int64
+		NullAge    sql.NullInt64
+		Score      float64
+		NullScore  sql.NullFloat64
+		Active     bool
+		NullActive sql.NullBool
+	}{
+		{
+			Name: "first",
+			NullName: sql.NullString{
+				String: "",
+				Valid:  false,
+			},
+			Age: 20,
+			NullAge: sql.NullInt64{
+				Int64: 0,
+				Valid: false,
+			},
+			Score: 85.5,
+			NullScore: sql.NullFloat64{
+				Float64: 0,
+				Valid:   false,
+			},
+			Active: true,
+			NullActive: sql.NullBool{
+				Bool:  false,
+				Valid: false,
+			},
+		},
+		{
+			Name: "second",
+			NullName: sql.NullString{
+				String: "not_null_name",
+				Valid:  true,
+			},
+			Age: 25,
+			NullAge: sql.NullInt64{
+				Int64: 30,
+				Valid: true,
+			},
+			Score: 90.0,
+			NullScore: sql.NullFloat64{
+				Float64: 95.5,
+				Valid:   true,
+			},
+			Active: false,
+			NullActive: sql.NullBool{
+				Bool:  true,
+				Valid: true,
+			},
+		},
+		{
+			Name: "third",
+			NullName: sql.NullString{
+				String: "",
+				Valid:  false,
+			},
+			Age: 0,
+			NullAge: sql.NullInt64{
+				Int64: 0,
+				Valid: false,
+			},
+			Score: 0,
+			NullScore: sql.NullFloat64{
+				Float64: 0,
+				Valid:   false,
+			},
+			Active: false,
+			NullActive: sql.NullBool{
+				Bool:  false,
+				Valid: false,
+			},
+		},
+	}
+
+	var value []struct {
+		Name       string          `db:"name"`
+		NullName   sql.NullString  `db:"null_name"`
+		Age        int64           `db:"age"`
+		NullAge    sql.NullInt64   `db:"null_age"`
+		Score      float64         `db:"score"`
+		NullScore  sql.NullFloat64 `db:"null_score"`
+		Active     bool            `db:"active"`
+		NullActive sql.NullBool    `db:"null_active"`
+	}
+
+	dbtest.RunTest(t, func(db *sql.DB, mock sqlmock.Sqlmock) {
+		rs := sqlmock.NewRows([]string{
+			"name", "null_name", "age", "null_age", "score", "null_score", "active", "null_active",
+		}).
+			AddRow("first", nil, 20, nil, 85.5, nil, true, nil).
+			AddRow("second", "not_null_name", 25, 30, 90.0, 95.5, false, true).
+			AddRow("third", nil, 0, nil, 0, nil, false, nil)
+
+		mock.ExpectQuery("select (.+) from users where type=?").
+			WithArgs("test").WillReturnRows(rs)
+
+		assert.Nil(t, query(context.Background(), db, func(rows *sql.Rows) error {
+			return unmarshalRows(&value, rows, true)
+		}, "select name, null_name, age, null_age, score, null_score, active, null_active from users where type=?", "test"))
+
+		assert.Equal(t, 3, len(value))
+
+		for i, each := range expect {
+			assert.Equal(t, each.Name, value[i].Name)
+			assert.Equal(t, each.Age, value[i].Age)
+			assert.Equal(t, each.Score, value[i].Score)
+			assert.Equal(t, each.Active, value[i].Active)
+
+			assert.Equal(t, each.NullName.Valid, value[i].NullName.Valid)
+			if each.NullName.Valid {
+				assert.Equal(t, each.NullName.String, value[i].NullName.String)
+			}
+
+			assert.Equal(t, each.NullAge.Valid, value[i].NullAge.Valid)
+			if each.NullAge.Valid {
+				assert.Equal(t, each.NullAge.Int64, value[i].NullAge.Int64)
+			}
+
+			assert.Equal(t, each.NullScore.Valid, value[i].NullScore.Valid)
+			if each.NullScore.Valid {
+				assert.Equal(t, each.NullScore.Float64, value[i].NullScore.Float64)
+			}
+
+			assert.Equal(t, each.NullActive.Valid, value[i].NullActive.Valid)
+			if each.NullActive.Valid {
+				assert.Equal(t, each.NullActive.Bool, value[i].NullActive.Bool)
+			}
+		}
+	})
+}
+
+func TestUnmarshalRowsSqlNullWithMixedData(t *testing.T) {
+	expect := []struct {
+		Name       string
+		NullName   sql.NullString
+		Age        int64
+		NullAge    sql.NullInt64
+		IsStudent  bool
+		NullActive sql.NullBool
+	}{
+		{
+			Name: "student1",
+			NullName: sql.NullString{
+				String: "",
+				Valid:  false,
+			},
+			Age: 18,
+			NullAge: sql.NullInt64{
+				Int64: 0,
+				Valid: false,
+			},
+			IsStudent: true,
+			NullActive: sql.NullBool{
+				Bool:  false,
+				Valid: false,
+			},
+		},
+		{
+			Name: "student2",
+			NullName: sql.NullString{
+				String: "has_nickname",
+				Valid:  true,
+			},
+			Age: 20,
+			NullAge: sql.NullInt64{
+				Int64: 22,
+				Valid: true,
+			},
+			IsStudent: false,
+			NullActive: sql.NullBool{
+				Bool:  true,
+				Valid: true,
+			},
+		},
+	}
+
+	var value []struct {
+		Name       string         `db:"name"`
+		NullName   sql.NullString `db:"null_name"`
+		Age        int64          `db:"age"`
+		NullAge    sql.NullInt64  `db:"null_age"`
+		IsStudent  bool           `db:"is_student"`
+		NullActive sql.NullBool   `db:"null_active"`
+	}
+
+	dbtest.RunTest(t, func(db *sql.DB, mock sqlmock.Sqlmock) {
+		rs := sqlmock.NewRows([]string{"name", "null_name", "age", "null_age", "is_student", "null_active"}).
+			AddRow("student1", nil, 18, nil, true, nil).
+			AddRow("student2", "has_nickname", 20, 22, false, true)
+
+		mock.ExpectQuery("select (.+) from students where class=?").
+			WithArgs("A").WillReturnRows(rs)
+
+		assert.Nil(t, query(context.Background(), db, func(rows *sql.Rows) error {
+			return unmarshalRows(&value, rows, true)
+		}, "select name, null_name, age, null_age, is_student, null_active from students where class=?", "A"))
+
+		assert.Equal(t, 2, len(value))
+
+		for i, each := range expect {
+			assert.Equal(t, each.Name, value[i].Name)
+			assert.Equal(t, each.Age, value[i].Age)
+			assert.Equal(t, each.IsStudent, value[i].IsStudent)
+
+			assert.Equal(t, each.NullName.Valid, value[i].NullName.Valid)
+			if each.NullName.Valid {
+				assert.Equal(t, each.NullName.String, value[i].NullName.String)
+			}
+
+			assert.Equal(t, each.NullAge.Valid, value[i].NullAge.Valid)
+			if each.NullAge.Valid {
+				assert.Equal(t, each.NullAge.Int64, value[i].NullAge.Int64)
+			}
+
+			assert.Equal(t, each.NullActive.Valid, value[i].NullActive.Valid)
+			if each.NullActive.Valid {
+				assert.Equal(t, each.NullActive.Bool, value[i].NullActive.Bool)
+			}
+		}
+	})
+}
+
|
func TestUnmarshalRowsSqlNullTime(t *testing.T) {
	now := time.Now()
	futureTime := now.AddDate(1, 0, 0)

	expect := []struct {
		Name      string
		BirthDate sql.NullTime
		LastLogin sql.NullTime
	}{
		{
			Name: "user1",
			BirthDate: sql.NullTime{
				Time:  time.Time{},
				Valid: false,
			},
			LastLogin: sql.NullTime{
				Time:  now,
				Valid: true,
			},
		},
		{
			Name: "user2",
			BirthDate: sql.NullTime{
				Time:  futureTime,
				Valid: true,
			},
			LastLogin: sql.NullTime{
				Time:  time.Time{},
				Valid: false,
			},
		},
	}

	var value []struct {
		Name      string       `db:"name"`
		BirthDate sql.NullTime `db:"birth_date"`
		LastLogin sql.NullTime `db:"last_login"`
	}

	dbtest.RunTest(t, func(db *sql.DB, mock sqlmock.Sqlmock) {
		rs := sqlmock.NewRows([]string{"name", "birth_date", "last_login"}).
			AddRow("user1", nil, now).
			AddRow("user2", futureTime, nil)

		mock.ExpectQuery("select (.+) from users").
			WillReturnRows(rs)

		assert.Nil(t, query(context.Background(), db, func(rows *sql.Rows) error {
			return unmarshalRows(&value, rows, true)
		}, "select name, birth_date, last_login from users"))

		assert.Equal(t, 2, len(value))

		for i, each := range expect {
			assert.Equal(t, each.Name, value[i].Name)

			assert.Equal(t, each.BirthDate.Valid, value[i].BirthDate.Valid)
			if each.BirthDate.Valid {
				assert.WithinDuration(t, each.BirthDate.Time, value[i].BirthDate.Time, time.Second)
			}

			assert.Equal(t, each.LastLogin.Valid, value[i].LastLogin.Valid)
			if each.LastLogin.Valid {
				assert.WithinDuration(t, each.LastLogin.Time, value[i].LastLogin.Time, time.Second)
			}
		}
	})
}
func TestUnmarshalRowsSqlNullWithEmptyValues(t *testing.T) {
	expect := []struct {
		Name       string
		NullString sql.NullString
		NullInt    sql.NullInt64
		NullFloat  sql.NullFloat64
		NullBool   sql.NullBool
	}{
		{
			Name: "empty_values",
			NullString: sql.NullString{
				String: "",
				Valid:  true,
			},
			NullInt: sql.NullInt64{
				Int64: 0,
				Valid: true,
			},
			NullFloat: sql.NullFloat64{
				Float64: 0.0,
				Valid:   true,
			},
			NullBool: sql.NullBool{
				Bool:  false,
				Valid: true,
			},
		},
		{
			Name: "null_values",
			NullString: sql.NullString{
				String: "",
				Valid:  false,
			},
			NullInt: sql.NullInt64{
				Int64: 0,
				Valid: false,
			},
			NullFloat: sql.NullFloat64{
				Float64: 0.0,
				Valid:   false,
			},
			NullBool: sql.NullBool{
				Bool:  false,
				Valid: false,
			},
		},
		{
			Name: "mixed_values",
			NullString: sql.NullString{
				String: "actual_value",
				Valid:  true,
			},
			NullInt: sql.NullInt64{
				Int64: 0,
				Valid: true,
			},
			NullFloat: sql.NullFloat64{
				Float64: 0.0,
				Valid:   false,
			},
			NullBool: sql.NullBool{
				Bool:  true,
				Valid: true,
			},
		},
	}

	var value []struct {
		Name       string          `db:"name"`
		NullString sql.NullString  `db:"null_string"`
		NullInt    sql.NullInt64   `db:"null_int"`
		NullFloat  sql.NullFloat64 `db:"null_float"`
		NullBool   sql.NullBool    `db:"null_bool"`
	}

	dbtest.RunTest(t, func(db *sql.DB, mock sqlmock.Sqlmock) {
		rs := sqlmock.NewRows([]string{"name", "null_string", "null_int", "null_float", "null_bool"}).
			AddRow("empty_values", "", 0, 0.0, false).
			AddRow("null_values", nil, nil, nil, nil).
			AddRow("mixed_values", "actual_value", 0, nil, true)

		mock.ExpectQuery("select (.+) from test_table").
			WillReturnRows(rs)

		assert.Nil(t, query(context.Background(), db, func(rows *sql.Rows) error {
			return unmarshalRows(&value, rows, true)
		}, "select name, null_string, null_int, null_float, null_bool from test_table"))

		assert.Equal(t, 3, len(value))

		for i, each := range expect {
			assert.Equal(t, each.Name, value[i].Name)

			assert.Equal(t, each.NullString.Valid, value[i].NullString.Valid)
			if each.NullString.Valid {
				assert.Equal(t, each.NullString.String, value[i].NullString.String)
			} else {
				assert.Equal(t, "", value[i].NullString.String)
			}

			assert.Equal(t, each.NullInt.Valid, value[i].NullInt.Valid)
			if each.NullInt.Valid {
				assert.Equal(t, each.NullInt.Int64, value[i].NullInt.Int64)
			} else {
				assert.Equal(t, int64(0), value[i].NullInt.Int64)
			}

			assert.Equal(t, each.NullFloat.Valid, value[i].NullFloat.Valid)
			if each.NullFloat.Valid {
				assert.Equal(t, each.NullFloat.Float64, value[i].NullFloat.Float64)
			} else {
				assert.Equal(t, 0.0, value[i].NullFloat.Float64)
			}

			assert.Equal(t, each.NullBool.Valid, value[i].NullBool.Valid)
			if each.NullBool.Valid {
				assert.Equal(t, each.NullBool.Bool, value[i].NullBool.Bool)
			} else {
				assert.Equal(t, false, value[i].NullBool.Bool)
			}
		}
	})
}
func TestUnmarshalRowsSqlNullStringEmptyVsNull(t *testing.T) {
	expect := []struct {
		Name         string
		EmptyString  sql.NullString
		NullString   sql.NullString
		NormalString sql.NullString
	}{
		{
			Name: "row1",
			EmptyString: sql.NullString{
				String: "",
				Valid:  true,
			},
			NullString: sql.NullString{
				String: "",
				Valid:  false,
			},
			NormalString: sql.NullString{
				String: "hello",
				Valid:  true,
			},
		},
		{
			Name: "row2",
			EmptyString: sql.NullString{
				String: " ",
				Valid:  true,
			},
			NullString: sql.NullString{
				String: "",
				Valid:  false,
			},
			NormalString: sql.NullString{
				String: "",
				Valid:  true,
			},
		},
	}

	var value []struct {
		Name         string         `db:"name"`
		EmptyString  sql.NullString `db:"empty_string"`
		NullString   sql.NullString `db:"null_string"`
		NormalString sql.NullString `db:"normal_string"`
	}

	dbtest.RunTest(t, func(db *sql.DB, mock sqlmock.Sqlmock) {
		rs := sqlmock.NewRows([]string{"name", "empty_string", "null_string", "normal_string"}).
			AddRow("row1", "", nil, "hello").
			AddRow("row2", " ", nil, "")

		mock.ExpectQuery("select (.+) from string_test").
			WillReturnRows(rs)

		assert.Nil(t, query(context.Background(), db, func(rows *sql.Rows) error {
			return unmarshalRows(&value, rows, true)
		}, "select name, empty_string, null_string, normal_string from string_test"))

		assert.Equal(t, 2, len(value))

		for i, each := range expect {
			assert.True(t, value[i].EmptyString.Valid)
			assert.Equal(t, each.EmptyString.String, value[i].EmptyString.String)

			assert.False(t, value[i].NullString.Valid)
			assert.Equal(t, "", value[i].NullString.String)

			assert.Equal(t, each.NormalString.Valid, value[i].NormalString.Valid)
			if each.NormalString.Valid {
				assert.Equal(t, each.NormalString.String, value[i].NormalString.String)
			}
		}
	})
}
func TestGetValueInterface(t *testing.T) {
	t.Run("non_pointer_field", func(t *testing.T) {
		type testStruct struct {
			Name string
			Age  int
		}
		s := testStruct{}
		v := reflect.ValueOf(&s).Elem()

		nameField := v.Field(0)
		result, err := getValueInterface(nameField)
		assert.NoError(t, err)
		assert.NotNil(t, result)

		// Should return pointer to the field
		ptr, ok := result.(*string)
		assert.True(t, ok)
		*ptr = "test"
		assert.Equal(t, "test", s.Name)
	})

	t.Run("pointer_field_nil", func(t *testing.T) {
		type testStruct struct {
			NamePtr *string
			AgePtr  *int64
		}
		s := testStruct{}
		v := reflect.ValueOf(&s).Elem()

		// Test with nil pointer field
		namePtrField := v.Field(0)
		assert.True(t, namePtrField.IsNil(), "initial pointer should be nil")

		result, err := getValueInterface(namePtrField)
		assert.NoError(t, err)
		assert.NotNil(t, result)

		// Should have allocated the pointer
		assert.False(t, namePtrField.IsNil(), "pointer should be allocated after getValueInterface")

		// Should return pointer to pointer field
		ptrPtr, ok := result.(**string)
		assert.True(t, ok)
		testValue := "initialized"
		*ptrPtr = &testValue
		assert.NotNil(t, s.NamePtr)
		assert.Equal(t, "initialized", *s.NamePtr)
	})

	t.Run("pointer_field_already_allocated", func(t *testing.T) {
		type testStruct struct {
			NamePtr *string
		}
		initial := "existing"
		s := testStruct{NamePtr: &initial}
		v := reflect.ValueOf(&s).Elem()

		namePtrField := v.Field(0)
		assert.False(t, namePtrField.IsNil(), "pointer should not be nil initially")

		result, err := getValueInterface(namePtrField)
		assert.NoError(t, err)
		assert.NotNil(t, result)

		// Should return pointer to pointer field
		ptrPtr, ok := result.(**string)
		assert.True(t, ok)

		// Verify it points to the existing value
		assert.Equal(t, "existing", **ptrPtr)

		// Modify through the returned pointer
		newValue := "modified"
		*ptrPtr = &newValue
		assert.Equal(t, "modified", *s.NamePtr)
	})

	t.Run("pointer_field_zero_value", func(t *testing.T) {
		type testStruct struct {
			IntPtr *int
		}
		s := testStruct{}
		v := reflect.ValueOf(&s).Elem()

		intPtrField := v.Field(0)
		result, err := getValueInterface(intPtrField)
		assert.NoError(t, err)

		// After calling getValueInterface, nil pointer should be allocated
		assert.NotNil(t, s.IntPtr)

		// Set zero value through returned interface
		ptrPtr, ok := result.(**int)
		assert.True(t, ok)
		zero := 0
		*ptrPtr = &zero
		assert.Equal(t, 0, *s.IntPtr)
	})

	t.Run("not_addressable_value", func(t *testing.T) {
		type testStruct struct {
			Name string
		}
		s := testStruct{Name: "test"}
		v := reflect.ValueOf(s) // Non-pointer, not addressable

		nameField := v.Field(0)
		result, err := getValueInterface(nameField)
		assert.Error(t, err)
		assert.Equal(t, ErrNotReadableValue, err)
		assert.Nil(t, result)
	})

	t.Run("multiple_pointer_types", func(t *testing.T) {
		type testStruct struct {
			StringPtr *string
			IntPtr    *int
			Int64Ptr  *int64
			FloatPtr  *float64
			BoolPtr   *bool
		}
		s := testStruct{}
		v := reflect.ValueOf(&s).Elem()

		// Test each pointer type gets properly initialized
		for i := 0; i < v.NumField(); i++ {
			field := v.Field(i)
			assert.True(t, field.IsNil(), "field %d should start as nil", i)

			result, err := getValueInterface(field)
			assert.NoError(t, err, "field %d should not error", i)
			assert.NotNil(t, result, "field %d result should not be nil", i)

			// After getValueInterface, pointer should be allocated
			assert.False(t, field.IsNil(), "field %d should be allocated", i)
		}
	})
}
func stringPtr(s string) *string {
	return &s
}

func int64Ptr(i int64) *int64 {
	return &i
}
func BenchmarkIgnore(b *testing.B) {
	db, mock, err := sqlmock.New()
	if err != nil {
@@ -3,6 +3,7 @@ package stringx
 import (
 	"errors"
 	"slices"
+	"strings"
 	"unicode"

 	"github.com/zeromicro/go-zero/core/lang"
@@ -21,20 +22,14 @@ func Contains(list []string, str string) bool {
 	return slices.Contains(list, str)
 }

-// Filter filters chars from s with given filter function.
-func Filter(s string, filter func(r rune) bool) string {
-	var n int
-	chars := []rune(s)
-	for i, x := range chars {
-		if n < i {
-			chars[n] = x
-		}
-		if !filter(x) {
-			n++
-		}
-	}
-
-	return string(chars[:n])
-}
+// Filter filters chars from s with given remove function.
+func Filter(s string, remove func(r rune) bool) string {
+	return strings.Map(func(r rune) rune {
+		if remove(r) {
+			return -1
+		}
+
+		return r
+	}, s)
+}

 // FirstN returns first n runes from s.
@@ -141,6 +136,7 @@ func Substr(str string, start, stop int) (string, error) {
 }

 // TakeOne returns valid string if not empty or later one.
+// Deprecated: use cmp.Or instead.
 func TakeOne(valid, or string) string {
 	if len(valid) > 0 {
 		return valid
@@ -29,6 +29,40 @@ func TestContainsString(t *testing.T) {
 	}
 }

+func TestHasEmpty(t *testing.T) {
+	cases := []struct {
+		args   []string
+		expect bool
+	}{
+		{
+			args:   []string{"a", "b", "c"},
+			expect: false,
+		},
+		{
+			args:   []string{"a", "", "c"},
+			expect: true,
+		},
+		{
+			args:   []string{""},
+			expect: true,
+		},
+		{
+			args:   []string{},
+			expect: false,
+		},
+		{
+			args:   nil,
+			expect: false,
+		},
+	}
+
+	for _, each := range cases {
+		t.Run(path.Join(each.args...), func(t *testing.T) {
+			assert.Equal(t, each.expect, HasEmpty(each.args...))
+		})
+	}
+}
+
 func TestNotEmpty(t *testing.T) {
 	cases := []struct {
 		args []string
@@ -92,6 +126,24 @@ func TestFilter(t *testing.T) {
 	}
 }

+func BenchmarkFilter(b *testing.B) {
+	b.Run("true", func(b *testing.B) {
+		b.ResetTimer()
+		b.ReportAllocs()
+		for i := 0; i < b.N; i++ {
+			Filter(`ab,cd,ef`, func(r rune) bool { return r == ',' })
+		}
+	})
+
+	b.Run("false", func(b *testing.B) {
+		b.ResetTimer()
+		b.ReportAllocs()
+		for i := 0; i < b.N; i++ {
+			Filter(`ab,cd,ef`, func(r rune) bool { return r == '!' })
+		}
+	})
+}
+
 func TestFirstN(t *testing.T) {
 	tests := []struct {
 		name string
@@ -1,13 +1,12 @@
 package threading

 import (
-	"io"
-	"log"
 	"sync"
 	"sync/atomic"
 	"testing"

 	"github.com/stretchr/testify/assert"
+	"github.com/zeromicro/go-zero/core/logx/logtest"
 )

 func TestRoutineGroupRun(t *testing.T) {
@@ -25,7 +24,7 @@ func TestRoutineGroupRun(t *testing.T) {
 }

 func TestRoutingGroupRunSafe(t *testing.T) {
-	log.SetOutput(io.Discard)
+	logtest.Discard(t)

 	var count int32
 	group := NewRoutineGroup()
@@ -3,13 +3,12 @@ package threading
 import (
 	"bytes"
 	"context"
-	"io"
-	"log"
 	"testing"

 	"github.com/stretchr/testify/assert"
 	"github.com/zeromicro/go-zero/core/lang"
 	"github.com/zeromicro/go-zero/core/logx"
+	"github.com/zeromicro/go-zero/core/logx/logtest"
 )

 func TestRoutineId(t *testing.T) {
@@ -17,7 +16,7 @@ func TestRoutineId(t *testing.T) {
 }

 func TestRunSafe(t *testing.T) {
-	log.SetOutput(io.Discard)
+	logtest.Discard(t)

 	i := 0
@@ -3,14 +3,11 @@ package trace
 import (
 	"context"
 	"fmt"
-	"net/url"
 	"os"
 	"sync"

-	"github.com/zeromicro/go-zero/core/lang"
 	"github.com/zeromicro/go-zero/core/logx"
 	"go.opentelemetry.io/otel"
-	"go.opentelemetry.io/otel/exporters/jaeger"
 	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
 	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
 	"go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
@@ -21,63 +18,47 @@ import (
 )

 const (
-	kindJaeger   = "jaeger"
 	kindZipkin   = "zipkin"
 	kindOtlpGrpc = "otlpgrpc"
 	kindOtlpHttp = "otlphttp"
 	kindFile     = "file"
-	protocolUdp  = "udp"
 )

 var (
-	agents = make(map[string]lang.PlaceholderType)
-	lock   sync.Mutex
-	tp     *sdktrace.TracerProvider
+	once sync.Once
+	tp   *sdktrace.TracerProvider
+	shutdownOnceFn = sync.OnceFunc(func() {
+		if tp != nil {
+			_ = tp.Shutdown(context.Background())
+		}
+	})
 )

 // StartAgent starts an opentelemetry agent.
+// It uses sync.Once to ensure the agent is initialized only once,
+// similar to prometheus.StartAgent and logx.SetUp.
+// This prevents multiple ServiceConf.SetUp() calls from reinitializing
+// the global tracer provider when running multiple servers (e.g., REST + RPC)
+// in the same process.
 func StartAgent(c Config) {
 	if c.Disabled {
 		return
 	}

-	lock.Lock()
-	defer lock.Unlock()
-
-	_, ok := agents[c.Endpoint]
-	if ok {
-		return
-	}
-
-	// if error happens, let later calls run.
-	if err := startAgent(c); err != nil {
-		return
-	}
-
-	agents[c.Endpoint] = lang.Placeholder
+	once.Do(func() {
+		if err := startAgent(c); err != nil {
+			logx.Error(err)
+		}
+	})
 }

 // StopAgent shuts down the span processors in the order they were registered.
 func StopAgent() {
-	lock.Lock()
-	defer lock.Unlock()
-
-	if tp != nil {
-		_ = tp.Shutdown(context.Background())
-		tp = nil
-	}
+	shutdownOnceFn()
 }

 func createExporter(c Config) (sdktrace.SpanExporter, error) {
-	// Just support jaeger and zipkin now, more for later
 	switch c.Batcher {
-	case kindJaeger:
-		u, err := url.Parse(c.Endpoint)
-		if err == nil && u.Scheme == protocolUdp {
-			return jaeger.New(jaeger.WithAgentEndpoint(jaeger.WithAgentHost(u.Hostname()),
-				jaeger.WithAgentPort(u.Port())))
-		}
-		return jaeger.New(jaeger.WithCollectorEndpoint(jaeger.WithEndpoint(c.Endpoint)))
 	case kindZipkin:
 		return zipkin.New(c.Endpoint)
 	case kindOtlpGrpc:
@@ -1,10 +1,13 @@
 package trace

 import (
+	"context"
+	"errors"
 	"testing"

 	"github.com/stretchr/testify/assert"
 	"github.com/zeromicro/go-zero/core/logx"
+	"go.opentelemetry.io/otel"
 )

 func TestStartAgent(t *testing.T) {
@@ -24,21 +27,16 @@ func TestStartAgent(t *testing.T) {
 		Name: "foo",
 	}
 	c2 := Config{
-		Name:     "bar",
-		Endpoint: endpoint1,
-		Batcher:  kindJaeger,
-	}
-	c3 := Config{
 		Name:     "any",
 		Endpoint: endpoint2,
 		Batcher:  kindZipkin,
 	}
-	c4 := Config{
+	c3 := Config{
 		Name:     "bla",
 		Endpoint: endpoint3,
 		Batcher:  "otlp",
 	}
-	c5 := Config{
+	c4 := Config{
 		Name:     "otlpgrpc",
 		Endpoint: endpoint3,
 		Batcher:  kindOtlpGrpc,
@@ -46,7 +44,7 @@ func TestStartAgent(t *testing.T) {
 			"uptrace-dsn": "http://project2_secret_token@localhost:14317/2",
 		},
 	}
-	c6 := Config{
+	c5 := Config{
 		Name:     "otlphttp",
 		Endpoint: endpoint4,
 		Batcher:  kindOtlpHttp,
@@ -55,22 +53,12 @@ func TestStartAgent(t *testing.T) {
 		},
 		OtlpHttpPath: "/v1/traces",
 	}
-	c7 := Config{
-		Name:     "UDP",
-		Endpoint: endpoint5,
-		Batcher:  kindJaeger,
-	}
-	c8 := Config{
-		Disabled: true,
-		Endpoint: endpoint6,
-		Batcher:  kindJaeger,
-	}
-	c9 := Config{
+	c6 := Config{
 		Name:     "file",
 		Endpoint: endpoint71,
 		Batcher:  kindFile,
 	}
-	c10 := Config{
+	c7 := Config{
 		Name:     "file",
 		Endpoint: endpoint72,
 		Batcher:  kindFile,
@@ -84,28 +72,289 @@ func TestStartAgent(t *testing.T) {
 	StartAgent(c5)
 	StartAgent(c6)
 	StartAgent(c7)
-	StartAgent(c8)
-	StartAgent(c9)
-	StartAgent(c10)
 	defer StopAgent()

-	lock.Lock()
-	defer lock.Unlock()
-
-	// because remotehost cannot be resolved
-	assert.Equal(t, 6, len(agents))
-	_, ok := agents[""]
-	assert.True(t, ok)
-	_, ok = agents[endpoint1]
-	assert.True(t, ok)
-	_, ok = agents[endpoint2]
-	assert.False(t, ok)
-	_, ok = agents[endpoint5]
-	assert.True(t, ok)
-	_, ok = agents[endpoint6]
-	assert.False(t, ok)
-	_, ok = agents[endpoint71]
-	assert.True(t, ok)
-	_, ok = agents[endpoint72]
-	assert.False(t, ok)
+	// With sync.Once, only the first non-disabled config (c1) takes effect.
+	// Subsequent calls are ignored, which is the desired behavior to prevent
+	// multiple servers (REST + RPC) from reinitializing the global tracer.
+	assert.NotNil(t, tp)
 }
+
+func TestCreateExporter_InvalidFilePath(t *testing.T) {
+	logx.Disable()
+
+	c := Config{
+		Name:     "test-invalid-file",
+		Endpoint: "/non-existent-directory/trace.log",
+		Batcher:  kindFile,
+	}
+
+	_, err := createExporter(c)
+	assert.Error(t, err)
+	assert.Contains(t, err.Error(), "file exporter endpoint error")
+}
+
+func TestCreateExporter_UnknownBatcher(t *testing.T) {
+	logx.Disable()
+
+	c := Config{
+		Name:     "test-unknown",
+		Endpoint: "localhost:1234",
+		Batcher:  "unknown-batcher-type",
+	}
+
+	_, err := createExporter(c)
+	assert.Error(t, err)
+	assert.Contains(t, err.Error(), "unknown exporter")
+}
+
+func TestCreateExporter_ValidExporters(t *testing.T) {
+	logx.Disable()
+
+	tests := []struct {
+		name    string
+		config  Config
+		wantErr bool
+		errMsg  string
+	}{
+		{
+			name: "valid file exporter",
+			config: Config{
+				Name:     "file-test",
+				Endpoint: "/tmp/trace-test.log",
+				Batcher:  kindFile,
+			},
+			wantErr: false,
+		},
+		{
+			name: "invalid file path",
+			config: Config{
+				Name:     "file-test-invalid",
+				Endpoint: "/invalid-path/that/does/not/exist/trace.log",
+				Batcher:  kindFile,
+			},
+			wantErr: true,
+			errMsg:  "file exporter endpoint error",
+		},
+		{
+			name: "unknown batcher",
+			config: Config{
+				Name:     "unknown-test",
+				Endpoint: "localhost:1234",
+				Batcher:  "invalid-batcher",
+			},
+			wantErr: true,
+			errMsg:  "unknown exporter",
+		},
+		{
+			name: "zipkin",
+			config: Config{
+				Name:     "zipkin",
+				Endpoint: "http://localhost:9411/api/v2/spans",
+				Batcher:  kindZipkin,
+			},
+			wantErr: false,
+		},
+		{
+			name: "otlpgrpc",
+			config: Config{
+				Name:     "otlpgrpc",
+				Endpoint: "localhost:4317",
+				Batcher:  kindOtlpGrpc,
+			},
+			wantErr: false,
+		},
+		{
+			name: "otlpgrpc with headers",
+			config: Config{
+				Name:     "otlpgrpc-headers",
+				Endpoint: "localhost:4317",
+				Batcher:  kindOtlpGrpc,
+				OtlpHeaders: map[string]string{
+					"authorization": "Bearer token123",
+					"x-custom-key":  "custom-value",
+				},
+			},
+			wantErr: false,
+		},
+		{
+			name: "otlphttp",
+			config: Config{
+				Name:     "otlphttp",
+				Endpoint: "localhost:4318",
+				Batcher:  kindOtlpHttp,
+			},
+			wantErr: false,
+		},
+		{
+			name: "otlphttp with headers",
+			config: Config{
+				Name:     "otlphttp-headers",
+				Endpoint: "localhost:4318",
+				Batcher:  kindOtlpHttp,
+				OtlpHeaders: map[string]string{
+					"authorization": "Bearer token456",
+					"x-api-key":     "api-key-value",
+				},
+			},
+			wantErr: false,
+		},
+		{
+			name: "otlphttp with headers and path",
+			config: Config{
+				Name:         "otlphttp-headers-path",
+				Endpoint:     "localhost:4318",
+				Batcher:      kindOtlpHttp,
+				OtlpHttpPath: "/v1/traces",
+				OtlpHeaders: map[string]string{
+					"authorization":  "Bearer token789",
+					"x-custom-trace": "trace-id",
+				},
+			},
+			wantErr: false,
+		},
+		{
+			name: "otlphttp with secure connection",
+			config: Config{
+				Name:           "otlphttp-secure",
+				Endpoint:       "localhost:4318",
+				Batcher:        kindOtlpHttp,
+				OtlpHttpSecure: true,
+				OtlpHeaders: map[string]string{
+					"authorization": "Bearer secure-token",
+				},
+			},
+			wantErr: false,
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			exporter, err := createExporter(tt.config)
+			if tt.wantErr {
+				assert.Error(t, err)
+				if tt.errMsg != "" {
+					assert.Contains(t, err.Error(), tt.errMsg)
+				}
+				assert.Nil(t, exporter)
+			} else {
+				assert.NoError(t, err)
+				assert.NotNil(t, exporter)
+				// Clean up the exporter
+				if exporter != nil {
+					_ = exporter.Shutdown(context.Background())
+				}
+			}
+		})
+	}
+}
+
+func TestStopAgent(t *testing.T) {
+	logx.Disable()
+
+	// StopAgent should be idempotent and safe to call multiple times
+	assert.NotPanics(t, func() {
+		StopAgent()
+		StopAgent()
+		StopAgent()
+	})
+}
+
+func TestStartAgent_WithEndpoint(t *testing.T) {
+	logx.Disable()
+
+	tests := []struct {
+		name    string
+		config  Config
+		wantErr bool
+	}{
+		{
+			name: "empty endpoint - no exporter created",
+			config: Config{
+				Name:    "test-no-endpoint",
+				Sampler: 1.0,
+			},
+			wantErr: false,
+		},
+		{
+			name: "valid endpoint with file exporter",
+			config: Config{
+				Name:     "test-with-endpoint",
+				Endpoint: "/tmp/test-trace.log",
+				Batcher:  kindFile,
+				Sampler:  1.0,
+			},
+			wantErr: false,
+		},
+		{
+			name: "endpoint with invalid exporter type",
+			config: Config{
+				Name:     "test-invalid-batcher",
+				Endpoint: "localhost:1234",
+				Batcher:  "invalid-type",
|
Sampler: 1.0,
|
||||||
|
},
|
||||||
|
wantErr: true,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "endpoint with invalid file path",
|
||||||
|
config: Config{
|
||||||
|
Name: "test-invalid-path",
|
||||||
|
Endpoint: "/non/existent/path/trace.log",
|
||||||
|
Batcher: kindFile,
|
||||||
|
Sampler: 1.0,
|
||||||
|
},
|
||||||
|
wantErr: true,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, tt := range tests {
|
||||||
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
|
// Reset tp for each test
|
||||||
|
originalTp := tp
|
||||||
|
tp = nil
|
||||||
|
defer func() {
|
||||||
|
if tp != nil {
|
||||||
|
_ = tp.Shutdown(context.Background())
|
||||||
|
}
|
||||||
|
tp = originalTp
|
||||||
|
}()
|
||||||
|
|
||||||
|
err := startAgent(tt.config)
|
||||||
|
if tt.wantErr {
|
||||||
|
assert.Error(t, err)
|
||||||
|
} else {
|
||||||
|
assert.NoError(t, err)
|
||||||
|
assert.NotNil(t, tp, "TracerProvider should be created")
|
||||||
|
}
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestStartAgent_ErrorHandler(t *testing.T) {
|
||||||
|
// Setup a tracer provider to test error handler
|
||||||
|
originalTp := tp
|
||||||
|
tp = nil
|
||||||
|
defer func() {
|
||||||
|
if tp != nil {
|
||||||
|
_ = tp.Shutdown(context.Background())
|
||||||
|
}
|
||||||
|
tp = originalTp
|
||||||
|
}()
|
||||||
|
|
||||||
|
// Call startAgent to set up the error handler
|
||||||
|
config := Config{
|
||||||
|
Name: "test-error-handler",
|
||||||
|
Sampler: 1.0,
|
||||||
|
}
|
||||||
|
err := startAgent(config)
|
||||||
|
assert.NoError(t, err)
|
||||||
|
assert.NotNil(t, tp)
|
||||||
|
|
||||||
|
// Verify the error handler was set and can be called without panicking
|
||||||
|
// We test this by calling otel.Handle which will invoke the registered error handler
|
||||||
|
testErr := errors.New("test otel error")
|
||||||
|
assert.NotPanics(t, func() {
|
||||||
|
otel.Handle(testErr)
|
||||||
|
}, "Error handler should handle errors without panicking")
|
||||||
}
|
}
|
||||||
|
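The tests above drive the trace agent through the supported `Batcher` kinds. As a standalone illustration of the kind dispatch they exercise, here is a minimal sketch; the `Config` struct and the `validateBatcher` helper are simplified stand-ins for illustration, not the real go-zero implementation:

```go
package main

import "fmt"

// Config is a stand-in mirroring the field names used in the tests above.
type Config struct {
	Name           string
	Endpoint       string
	Batcher        string
	OtlpHeaders    map[string]string
	OtlpHttpPath   string
	OtlpHttpSecure bool
}

// Batcher kinds as they appear in the tests.
const (
	kindZipkin   = "zipkin"
	kindOtlpGrpc = "otlpgrpc"
	kindOtlpHttp = "otlphttp"
	kindFile     = "file"
)

// validateBatcher sketches the error path the tests assert on when an
// unknown Batcher kind is configured (hypothetical helper).
func validateBatcher(c Config) error {
	switch c.Batcher {
	case kindZipkin, kindOtlpGrpc, kindOtlpHttp, kindFile:
		return nil
	default:
		return fmt.Errorf("unknown exporter: %s", c.Batcher)
	}
}

func main() {
	fmt.Println(validateBatcher(Config{Name: "ok", Batcher: kindOtlpGrpc}))
	fmt.Println(validateBatcher(Config{Name: "bad", Batcher: "invalid-type"}))
}
```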
@@ -8,7 +8,7 @@ type Config struct {
 	Name     string  `json:",optional"`
 	Endpoint string  `json:",optional"`
 	Sampler  float64 `json:",default=1.0"`
-	Batcher  string  `json:",default=jaeger,options=jaeger|zipkin|otlpgrpc|otlphttp|file"`
+	Batcher  string  `json:",default=otlpgrpc,options=zipkin|otlpgrpc|otlphttp|file"`
 	// OtlpHeaders represents the headers for OTLP gRPC or HTTP transport.
 	// For example:
 	//  uptrace-dsn: 'http://project2_secret_token@localhost:14317/2'
@@ -12,6 +12,16 @@ import (
 	"google.golang.org/grpc/status"
 )
 
+const (
+	// MetadataHeaderPrefix is the http prefix that represents custom metadata
+	// parameters to or from a gRPC call.
+	MetadataHeaderPrefix = "Grpc-Metadata-"
+
+	// MetadataTrailerPrefix is prepended to gRPC metadata as it is converted to
+	// HTTP headers in a response handled by go-zero gateway
+	MetadataTrailerPrefix = "Grpc-Trailer-"
+)
+
 type EventHandler struct {
 	Status *status.Status
 	writer io.Writer
@@ -31,9 +41,10 @@ func NewEventHandler(writer io.Writer, resolver jsonpb.AnyResolver) *EventHandler {
 func (h *EventHandler) OnReceiveHeaders(md metadata.MD) {
 	w, ok := h.writer.(http.ResponseWriter)
 	if ok {
-		for k, v := range md {
-			for _, val := range v {
-				w.Header().Add(k, val)
+		for k, vs := range md {
+			header := defaultOutgoingHeaderMatcher(k)
+			for _, v := range vs {
+				w.Header().Add(header, v)
 			}
 		}
 	}
@@ -48,9 +59,10 @@ func (h *EventHandler) OnReceiveResponse(message proto.Message) {
 func (h *EventHandler) OnReceiveTrailers(status *status.Status, md metadata.MD) {
 	w, ok := h.writer.(http.ResponseWriter)
 	if ok {
-		for k, v := range md {
-			for _, val := range v {
-				w.Header().Add(k, val)
+		for k, vs := range md {
+			header := defaultOutgoingTrailerMatcher(k)
+			for _, v := range vs {
+				w.Header().Add(header, v)
 			}
 		}
 	}
@@ -63,3 +75,11 @@ func (h *EventHandler) OnResolveMethod(_ *desc.MethodDescriptor) {
 
 func (h *EventHandler) OnSendHeaders(_ metadata.MD) {
 }
+
+func defaultOutgoingHeaderMatcher(key string) string {
+	return MetadataHeaderPrefix + key
+}
+
+func defaultOutgoingTrailerMatcher(key string) string {
+	return MetadataTrailerPrefix + key
+}
@@ -40,8 +40,8 @@ func TestEventHandler_OnReceiveTrailers(t *testing.T) {
 			},
 			expectedStatus: codes.OK,
 			expectedHeader: map[string][]string{
-				"X-Custom-Header":  {"value1", "value2"},
-				"X-Another-Header": {"single-value"},
+				"Grpc-Trailer-X-Custom-Header":  {"value1", "value2"},
+				"Grpc-Trailer-X-Another-Header": {"single-value"},
 			},
 		},
 		{
@@ -100,9 +100,9 @@ func TestEventHandler_OnReceiveHeaders(t *testing.T) {
 				"x-another-header": []string{"single-value"},
 			},
 			expectedHeader: map[string][]string{
-				"Content-Type":     {"application/json"},
-				"X-Custom-Header":  {"value1", "value2"},
-				"X-Another-Header": {"single-value"},
+				"Grpc-Metadata-Content-Type":     {"application/json"},
+				"Grpc-Metadata-X-Custom-Header":  {"value1", "value2"},
+				"Grpc-Metadata-X-Another-Header": {"single-value"},
 			},
 		},
 		{
@@ -158,7 +158,81 @@ func TestEventHandler_OnReceiveHeaders_MultipleValues(t *testing.T) {
 		"x-header-2": []string{"value3"},
 	})
 
-	// Check that headers are accumulated (not overwritten)
-	assert.Equal(t, []string{"value1", "value2"}, recorder.Header()["X-Header-1"])
-	assert.Equal(t, []string{"value3"}, recorder.Header()["X-Header-2"])
+	// Check that headers are accumulated (not overwritten) with proper prefix
+	assert.Equal(t, []string{"value1", "value2"}, recorder.Header()["Grpc-Metadata-X-Header-1"])
+	assert.Equal(t, []string{"value3"}, recorder.Header()["Grpc-Metadata-X-Header-2"])
+}
+
+func TestEventHandler_OnReceiveHeaders_MetadataPrefix(t *testing.T) {
+	tests := []struct {
+		name           string
+		metadata       metadata.MD
+		expectedHeader map[string][]string
+	}{
+		{
+			name: "all metadata headers should be prefixed with Grpc-Metadata-",
+			metadata: metadata.MD{
+				"content-type":    []string{"application/grpc"},
+				"x-custom-header": []string{"value1"},
+				"authorization":   []string{"Bearer token"},
+			},
+			expectedHeader: map[string][]string{
+				"Grpc-Metadata-Content-Type":    {"application/grpc"},
+				"Grpc-Metadata-X-Custom-Header": {"value1"},
+				"Grpc-Metadata-Authorization":   {"Bearer token"},
+			},
+		},
+		{
+			name: "mixed case headers should be prefixed",
+			metadata: metadata.MD{
+				"Content-Type":    []string{"APPLICATION/JSON"},
+				"X-Custom-Header": []string{"value1"},
+			},
+			expectedHeader: map[string][]string{
+				"Grpc-Metadata-Content-Type":    {"APPLICATION/JSON"},
+				"Grpc-Metadata-X-Custom-Header": {"value1"},
+			},
+		},
+		{
+			name: "multiple values for same header",
+			metadata: metadata.MD{
+				"x-multi-header": []string{"value1", "value2", "value3"},
+			},
+			expectedHeader: map[string][]string{
+				"Grpc-Metadata-X-Multi-Header": {"value1", "value2", "value3"},
+			},
+		},
+		{
+			name:           "empty metadata",
+			metadata:       metadata.MD{},
+			expectedHeader: map[string][]string{},
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			recorder := httptest.NewRecorder()
+			h := NewEventHandler(recorder, nil)
+
+			h.OnReceiveHeaders(tt.metadata)
+
+			// Check that headers are set correctly
+			for key, expectedValues := range tt.expectedHeader {
+				actualValues := recorder.Header()[key]
+				assert.Equal(t, expectedValues, actualValues, "Header %s should match", key)
+			}
+
+			// Ensure no unexpected headers are set
+			for actualKey := range recorder.Header() {
+				found := false
+				for expectedKey := range tt.expectedHeader {
+					if actualKey == expectedKey {
+						found = true
+						break
+					}
+				}
+				assert.True(t, found, "Unexpected header found: %s", actualKey)
+			}
+		})
+	}
 }
@@ -11,16 +11,40 @@ const (
 	metadataPrefix = "gateway-"
 )
 
+// OpenTelemetry trace propagation headers that need to be forwarded to gRPC metadata.
+// These headers are used by the W3C Trace Context standard for distributed tracing.
+var traceHeaders = map[string]bool{
+	"traceparent": true,
+	"tracestate":  true,
+	"baggage":     true,
+}
+
 // ProcessHeaders builds the headers for the gateway from HTTP headers.
+// It forwards both custom metadata headers (with Grpc-Metadata- prefix)
+// and OpenTelemetry trace propagation headers (traceparent, tracestate, baggage)
+// to ensure distributed tracing works correctly across the gateway.
 func ProcessHeaders(header http.Header) []string {
 	var headers []string
 
 	for k, v := range header {
+		// Forward OpenTelemetry trace propagation headers.
+		// These must be lowercase per gRPC metadata conventions.
+		if lowerKey := strings.ToLower(k); traceHeaders[lowerKey] {
+			for _, vv := range v {
+				headers = append(headers, lowerKey+":"+vv)
+			}
+			continue
+		}
+
+		// Forward custom metadata headers with Grpc-Metadata- prefix.
 		if !strings.HasPrefix(k, metadataHeaderPrefix) {
 			continue
 		}
 
-		key := fmt.Sprintf("%s%s", metadataPrefix, strings.TrimPrefix(k, metadataHeaderPrefix))
+		// gRPC metadata keys are case-insensitive and stored as lowercase,
+		// so we lowercase the key to match gRPC conventions.
+		trimmedKey := strings.TrimPrefix(k, metadataHeaderPrefix)
+		key := strings.ToLower(fmt.Sprintf("%s%s", metadataPrefix, trimmedKey))
 		for _, vv := range v {
 			headers = append(headers, key+":"+vv)
 		}
 	}
@@ -18,5 +18,93 @@ func TestBuildHeadersWithValues(t *testing.T) {
 	req := httptest.NewRequest("GET", "/", http.NoBody)
 	req.Header.Add("grpc-metadata-a", "b")
 	req.Header.Add("grpc-metadata-b", "b")
-	assert.ElementsMatch(t, []string{"gateway-A:b", "gateway-B:b"}, ProcessHeaders(req.Header))
+	assert.ElementsMatch(t, []string{"gateway-a:b", "gateway-b:b"}, ProcessHeaders(req.Header))
+}
+
+func TestProcessHeadersWithTraceContext(t *testing.T) {
+	req := httptest.NewRequest("GET", "/", http.NoBody)
+	req.Header.Set("traceparent", "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
+	req.Header.Set("tracestate", "key1=value1,key2=value2")
+	req.Header.Set("baggage", "userId=alice,serverNode=DF:28")
+
+	headers := ProcessHeaders(req.Header)
+
+	assert.Len(t, headers, 3)
+	assert.Contains(t, headers, "traceparent:00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
+	assert.Contains(t, headers, "tracestate:key1=value1,key2=value2")
+	assert.Contains(t, headers, "baggage:userId=alice,serverNode=DF:28")
+}
+
+func TestProcessHeadersWithMixedHeaders(t *testing.T) {
+	req := httptest.NewRequest("GET", "/", http.NoBody)
+	req.Header.Set("traceparent", "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
+	req.Header.Set("grpc-metadata-custom", "value1")
+	req.Header.Set("content-type", "application/json")
+	req.Header.Set("tracestate", "key1=value1")
+
+	headers := ProcessHeaders(req.Header)
+
+	// Should include trace headers and grpc-metadata headers, but not regular headers
+	assert.Len(t, headers, 3)
+	assert.Contains(t, headers, "traceparent:00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
+	assert.Contains(t, headers, "tracestate:key1=value1")
+	assert.Contains(t, headers, "gateway-custom:value1")
+}
+
+func TestProcessHeadersTraceparentCaseInsensitive(t *testing.T) {
+	tests := []struct {
+		name        string
+		headerKey   string
+		headerVal   string
+		expectedKey string
+	}{
+		{
+			name:        "lowercase traceparent",
+			headerKey:   "traceparent",
+			headerVal:   "00-trace-span-01",
+			expectedKey: "traceparent",
+		},
+		{
+			name:        "uppercase Traceparent",
+			headerKey:   "Traceparent",
+			headerVal:   "00-trace-span-01",
+			expectedKey: "traceparent",
+		},
+		{
+			name:        "mixed case TraceParent",
+			headerKey:   "TraceParent",
+			headerVal:   "00-trace-span-01",
+			expectedKey: "traceparent",
+		},
+		{
+			name:        "lowercase tracestate",
+			headerKey:   "tracestate",
+			headerVal:   "key=value",
+			expectedKey: "tracestate",
+		},
+		{
+			name:        "mixed case TraceState",
+			headerKey:   "TraceState",
+			headerVal:   "key=value",
+			expectedKey: "tracestate",
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			req := httptest.NewRequest("GET", "/", http.NoBody)
+			req.Header.Set(tt.headerKey, tt.headerVal)
+
+			headers := ProcessHeaders(req.Header)
+
+			assert.Len(t, headers, 1)
+			assert.Contains(t, headers, tt.expectedKey+":"+tt.headerVal)
+		})
+	}
+}
+
+func TestProcessHeadersEmptyHeaders(t *testing.T) {
+	req := httptest.NewRequest("GET", "/", http.NoBody)
+	headers := ProcessHeaders(req.Header)
+	assert.Empty(t, headers)
 }
@@ -329,8 +329,9 @@ func createDescriptorSource(cli zrpc.Client, up Upstream) (grpcurl.DescriptorSource, error) {
 	return source, nil
 }
 
-// withDialer sets a dialer to create a gRPC client.
-func withDialer(dialer func(conf zrpc.RpcClientConf) zrpc.Client) func(*Server) {
+// WithDialer sets a dialer to create a gRPC client.
+// This allows customization of gRPC client options, such as message size limits.
+func WithDialer(dialer func(conf zrpc.RpcClientConf) zrpc.Client) func(*Server) {
 	return func(s *Server) {
 		s.dialer = dialer
 	}
@@ -54,7 +54,7 @@ func TestMustNewServer(t *testing.T) {
 	c.Host = "localhost"
 	c.Port = 18881
 
-	s := MustNewServer(c, withDialer(func(conf zrpc.RpcClientConf) zrpc.Client {
+	s := MustNewServer(c, WithDialer(func(conf zrpc.RpcClientConf) zrpc.Client {
 		return zrpc.MustNewClient(conf, zrpc.WithDialOption(grpc.WithContextDialer(dialer())))
 	}), WithHeaderProcessor(func(header http.Header) []string {
 		return []string{"foo"}
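Exporting `WithDialer` turns the dialer into a regular functional option on the gateway server. A minimal sketch of that option shape; the `Server`, `Client`, and `RpcClientConf` types below are stand-ins for illustration, not the real zrpc/gateway definitions:

```go
package main

import "fmt"

// Stand-in types modeling the shapes WithDialer works with.
type RpcClientConf struct{ Target string }
type Client struct{ conf RpcClientConf }
type Server struct {
	dialer func(conf RpcClientConf) Client
}

// WithDialer returns a functional option that replaces the server's
// default dialer, mirroring the newly exported option in the diff.
func WithDialer(dialer func(conf RpcClientConf) Client) func(*Server) {
	return func(s *Server) { s.dialer = dialer }
}

// NewServer applies options over a default configuration.
func NewServer(opts ...func(*Server)) *Server {
	s := &Server{
		dialer: func(conf RpcClientConf) Client { return Client{conf: conf} },
	}
	for _, opt := range opts {
		opt(s)
	}
	return s
}

func main() {
	s := NewServer(WithDialer(func(conf RpcClientConf) Client {
		fmt.Println("custom dialer for", conf.Target)
		return Client{conf: conf}
	}))
	s.dialer(RpcClientConf{Target: "localhost:8080"})
}
```

Because the option only swaps a function value, callers can inject clients with custom dial options (e.g. larger message sizes) without the gateway knowing the details.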
153
go.mod
153
go.mod
@@ -1,124 +1,139 @@
|
|||||||
module github.com/zeromicro/go-zero
|
module github.com/zeromicro/go-zero
|
||||||
|
|
||||||
go 1.21
|
go 1.24.0
|
||||||
|
|
||||||
require (
|
require (
|
||||||
github.com/DATA-DOG/go-sqlmock v1.5.2
|
github.com/DATA-DOG/go-sqlmock v1.5.2
|
||||||
github.com/alicebob/miniredis/v2 v2.35.0
|
github.com/alicebob/miniredis/v2 v2.37.0
|
||||||
github.com/fatih/color v1.18.0
|
github.com/fatih/color v1.18.0
|
||||||
github.com/fullstorydev/grpcurl v1.9.3
|
github.com/fullstorydev/grpcurl v1.9.3
|
||||||
github.com/go-sql-driver/mysql v1.9.0
|
github.com/go-sql-driver/mysql v1.9.3
|
||||||
github.com/golang-jwt/jwt/v4 v4.5.2
|
github.com/golang-jwt/jwt/v4 v4.5.2
|
||||||
github.com/golang/protobuf v1.5.4
|
github.com/golang/protobuf v1.5.4
|
||||||
github.com/google/uuid v1.6.0
|
github.com/google/uuid v1.6.0
|
||||||
github.com/grafana/pyroscope-go v1.2.4
|
github.com/grafana/pyroscope-go v1.2.8
|
||||||
github.com/jackc/pgx/v5 v5.7.4
|
github.com/jackc/pgx/v5 v5.8.0
|
||||||
github.com/jhump/protoreflect v1.17.0
|
github.com/jhump/protoreflect v1.18.0
|
||||||
github.com/pelletier/go-toml/v2 v2.2.2
|
github.com/modelcontextprotocol/go-sdk v1.4.0
|
||||||
github.com/prometheus/client_golang v1.21.1
|
github.com/pelletier/go-toml/v2 v2.3.0
|
||||||
github.com/redis/go-redis/v9 v9.12.1
|
github.com/prometheus/client_golang v1.23.2
|
||||||
|
github.com/redis/go-redis/v9 v9.18.0
|
||||||
github.com/spaolacci/murmur3 v1.1.0
|
github.com/spaolacci/murmur3 v1.1.0
|
||||||
github.com/stretchr/testify v1.10.0
|
github.com/stretchr/testify v1.11.1
|
||||||
go.etcd.io/etcd/api/v3 v3.5.15
|
github.com/titanous/json5 v1.0.0
|
||||||
go.etcd.io/etcd/client/v3 v3.5.15
|
go.etcd.io/etcd/api/v3 v3.5.21
|
||||||
go.mongodb.org/mongo-driver/v2 v2.3.0
|
go.etcd.io/etcd/client/v3 v3.5.21
|
||||||
go.opentelemetry.io/otel v1.24.0
|
go.mongodb.org/mongo-driver/v2 v2.6.0
|
||||||
go.opentelemetry.io/otel/exporters/jaeger v1.17.0
|
go.opentelemetry.io/otel v1.40.0
|
||||||
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.24.0
|
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.40.0
|
||||||
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.24.0
|
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.40.0
|
||||||
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.24.0
|
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.40.0
|
||||||
go.opentelemetry.io/otel/exporters/zipkin v1.24.0
|
go.opentelemetry.io/otel/exporters/zipkin v1.40.0
|
||||||
go.opentelemetry.io/otel/sdk v1.24.0
|
go.opentelemetry.io/otel/sdk v1.40.0
|
||||||
go.opentelemetry.io/otel/trace v1.24.0
|
go.opentelemetry.io/otel/trace v1.40.0
|
||||||
go.uber.org/automaxprocs v1.6.0
|
go.uber.org/automaxprocs v1.6.0
|
||||||
go.uber.org/goleak v1.3.0
|
go.uber.org/goleak v1.3.0
|
||||||
go.uber.org/mock v0.4.0
|
go.uber.org/mock v0.6.0
|
||||||
golang.org/x/net v0.35.0
|
golang.org/x/net v0.50.0
|
||||||
golang.org/x/sys v0.30.0
|
golang.org/x/sys v0.41.0
|
||||||
golang.org/x/time v0.10.0
|
golang.org/x/time v0.14.0
|
||||||
google.golang.org/genproto/googleapis/api v0.0.0-20240711142825-46eb208f015d
|
google.golang.org/genproto/googleapis/api v0.0.0-20260128011058-8636f8732409
|
||||||
google.golang.org/grpc v1.65.0
|
google.golang.org/grpc v1.80.0
|
||||||
google.golang.org/protobuf v1.36.5
|
google.golang.org/protobuf v1.36.11
|
||||||
gopkg.in/cheggaaa/pb.v1 v1.0.28
|
gopkg.in/cheggaaa/pb.v1 v1.0.28
|
||||||
gopkg.in/h2non/gock.v1 v1.1.2
|
gopkg.in/h2non/gock.v1 v1.1.2
|
||||||
gopkg.in/yaml.v2 v2.4.0
|
gopkg.in/yaml.v2 v2.4.0
|
||||||
k8s.io/api v0.29.3
|
k8s.io/api v0.34.3
|
||||||
k8s.io/apimachinery v0.29.4
|
k8s.io/apimachinery v0.34.3
|
||||||
k8s.io/client-go v0.29.3
|
k8s.io/client-go v0.34.3
|
||||||
k8s.io/utils v0.0.0-20240711033017-18e509b52bc8
|
k8s.io/utils v0.0.0-20260319190234-28399d86e0b5
|
||||||
)
|
)
|
||||||
|
|
||||||
require (
|
require (
|
||||||
filippo.io/edwards25519 v1.1.0 // indirect
|
filippo.io/edwards25519 v1.1.0 // indirect
|
||||||
github.com/beorn7/perks v1.0.1 // indirect
|
github.com/beorn7/perks v1.0.1 // indirect
|
||||||
github.com/bufbuild/protocompile v0.14.1 // indirect
|
github.com/cenkalti/backoff/v5 v5.0.3 // indirect
|
||||||
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
|
|
||||||
github.com/cespare/xxhash/v2 v2.3.0 // indirect
|
github.com/cespare/xxhash/v2 v2.3.0 // indirect
|
||||||
github.com/cncf/xds/go v0.0.0-20240423153145-555b57ec207b // indirect
|
github.com/cncf/xds/go v0.0.0-20251210132809-ee656c7534f5 // indirect
|
||||||
github.com/coreos/go-semver v0.3.1 // indirect
|
github.com/coreos/go-semver v0.3.1 // indirect
|
||||||
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
|
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
|
||||||
github.com/davecgh/go-spew v1.1.1 // indirect
|
github.com/davecgh/go-spew v1.1.1 // indirect
|
||||||
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
|
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
|
||||||
github.com/emicklei/go-restful/v3 v3.11.0 // indirect
|
github.com/emicklei/go-restful/v3 v3.12.2 // indirect
|
||||||
github.com/envoyproxy/go-control-plane v0.12.0 // indirect
|
github.com/envoyproxy/go-control-plane/envoy v1.36.0 // indirect
|
||||||
github.com/envoyproxy/protoc-gen-validate v1.0.4 // indirect
|
github.com/envoyproxy/protoc-gen-validate v1.3.0 // indirect
|
||||||
github.com/go-logr/logr v1.4.2 // indirect
|
github.com/fxamacker/cbor/v2 v2.9.0 // indirect
|
||||||
|
github.com/go-jose/go-jose/v4 v4.1.3 // indirect
|
||||||
|
github.com/go-logr/logr v1.4.3 // indirect
|
||||||
github.com/go-logr/stdr v1.2.2 // indirect
|
github.com/go-logr/stdr v1.2.2 // indirect
|
||||||
github.com/go-openapi/jsonpointer v0.19.6 // indirect
|
github.com/go-openapi/jsonpointer v0.21.0 // indirect
|
||||||
github.com/go-openapi/jsonreference v0.20.2 // indirect
|
github.com/go-openapi/jsonreference v0.20.2 // indirect
|
||||||
github.com/go-openapi/swag v0.22.4 // indirect
|
github.com/go-openapi/swag v0.23.0 // indirect
|
||||||
github.com/gogo/protobuf v1.3.2 // indirect
|
github.com/gogo/protobuf v1.3.2 // indirect
|
||||||
github.com/golang/snappy v1.0.0 // indirect
|
github.com/google/gnostic-models v0.7.0 // indirect
|
||||||
github.com/google/gnostic-models v0.6.8 // indirect
|
github.com/google/go-cmp v0.7.0 // indirect
|
||||||
github.com/google/go-cmp v0.6.0 // indirect
|
github.com/google/jsonschema-go v0.4.2 // indirect
|
||||||
github.com/google/gofuzz v1.2.0 // indirect
|
github.com/grafana/pyroscope-go/godeltaprof v0.1.9 // indirect
|
||||||
github.com/grafana/pyroscope-go/godeltaprof v0.1.8 // indirect
|
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.7 // indirect
|
||||||
github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0 // indirect
|
|
||||||
github.com/h2non/parth v0.0.0-20190131123155-b4df798d6542 // indirect
|
github.com/h2non/parth v0.0.0-20190131123155-b4df798d6542 // indirect
|
||||||
github.com/jackc/pgpassfile v1.0.0 // indirect
|
github.com/jackc/pgpassfile v1.0.0 // indirect
|
||||||
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
|
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
|
||||||
github.com/jackc/puddle/v2 v2.2.2 // indirect
|
github.com/jackc/puddle/v2 v2.2.2 // indirect
|
||||||
|
github.com/jhump/protoreflect/v2 v2.0.0-beta.1 // indirect
|
||||||
github.com/josharian/intern v1.0.0 // indirect
|
github.com/josharian/intern v1.0.0 // indirect
|
||||||
github.com/json-iterator/go v1.1.12 // indirect
|
github.com/json-iterator/go v1.1.12 // indirect
|
||||||
github.com/klauspost/compress v1.17.11 // indirect
|
github.com/klauspost/compress v1.18.0 // indirect
|
||||||
github.com/kylelemons/godebug v1.1.0 // indirect
|
github.com/kylelemons/godebug v1.1.0 // indirect
|
||||||
github.com/mailru/easyjson v0.7.7 // indirect
|
github.com/mailru/easyjson v0.7.7 // indirect
|
||||||
github.com/mattn/go-colorable v0.1.13 // indirect
|
github.com/mattn/go-colorable v0.1.13 // indirect
|
||||||
github.com/mattn/go-isatty v0.0.20 // indirect
|
github.com/mattn/go-isatty v0.0.20 // indirect
|
||||||
github.com/mattn/go-runewidth v0.0.15 // indirect
|
github.com/mattn/go-runewidth v0.0.15 // indirect
|
||||||
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
|
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
|
||||||
github.com/modern-go/reflect2 v1.0.2 // indirect
|
github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee // indirect
|
||||||
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
|
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
|
||||||
github.com/openzipkin/zipkin-go v0.4.3 // indirect
|
github.com/openzipkin/zipkin-go v0.4.3 // indirect
|
||||||
|
github.com/petermattis/goid v0.0.0-20260113132338-7c7de50cc741 // indirect
|
||||||
|
github.com/pkg/errors v0.9.1 // indirect
|
||||||
|
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 // indirect
|
||||||
github.com/pmezard/go-difflib v1.0.0 // indirect
|
github.com/pmezard/go-difflib v1.0.0 // indirect
|
||||||
github.com/prometheus/client_model v0.6.1 // indirect
|
github.com/prometheus/client_model v0.6.2 // indirect
|
||||||
github.com/prometheus/common v0.62.0 // indirect
|
github.com/prometheus/common v0.66.1 // indirect
|
||||||
github.com/prometheus/procfs v0.15.1 // indirect
|
github.com/prometheus/procfs v0.16.1 // indirect
|
||||||
github.com/rivo/uniseg v0.2.0 // indirect
|
github.com/rivo/uniseg v0.2.0 // indirect
|
||||||
|
github.com/segmentio/asm v1.1.3 // indirect
|
||||||
|
github.com/segmentio/encoding v0.5.3 // indirect
|
||||||
|
github.com/spiffe/go-spiffe/v2 v2.6.0 // indirect
|
||||||
github.com/stretchr/objx v0.5.2 // indirect
|
github.com/stretchr/objx v0.5.2 // indirect
|
||||||
|
github.com/x448/float16 v0.8.4 // indirect
|
||||||
github.com/xdg-go/pbkdf2 v1.0.0 // indirect
|
github.com/xdg-go/pbkdf2 v1.0.0 // indirect
|
||||||
github.com/xdg-go/scram v1.1.2 // indirect
|
github.com/xdg-go/scram v1.2.0 // indirect
|
||||||
github.com/xdg-go/stringprep v1.0.4 // indirect
|
github.com/xdg-go/stringprep v1.0.4 // indirect
|
||||||
|
github.com/yosida95/uritemplate/v3 v3.0.2 // indirect
|
||||||
github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78 // indirect
|
github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78 // indirect
|
||||||
github.com/yuin/gopher-lua v1.1.1 // indirect
|
github.com/yuin/gopher-lua v1.1.1 // indirect
|
||||||
go.etcd.io/etcd/client/pkg/v3 v3.5.15 // indirect
|
go.etcd.io/etcd/client/pkg/v3 v3.5.21 // indirect
|
||||||
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.24.0 // indirect
|
go.opentelemetry.io/auto/sdk v1.2.1 // indirect
|
||||||
go.opentelemetry.io/otel/metric v1.24.0 // indirect
|
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.40.0 // indirect
|
||||||
go.opentelemetry.io/proto/otlp v1.3.1 // indirect
|
go.opentelemetry.io/otel/metric v1.40.0 // indirect
|
||||||
go.uber.org/atomic v1.10.0 // indirect
|
go.opentelemetry.io/proto/otlp v1.9.0 // indirect
|
||||||
|
go.uber.org/atomic v1.11.0 // indirect
|
||||||
go.uber.org/multierr v1.9.0 // indirect
|
go.uber.org/multierr v1.9.0 // indirect
|
||||||
go.uber.org/zap v1.24.0 // indirect
|
go.uber.org/zap v1.24.0 // indirect
|
||||||
golang.org/x/crypto v0.33.0 // indirect
|
go.yaml.in/yaml/v2 v2.4.2 // indirect
|
||||||
golang.org/x/oauth2 v0.24.0 // indirect
|
go.yaml.in/yaml/v3 v3.0.4 // indirect
|
||||||
golang.org/x/sync v0.11.0 // indirect
|
golang.org/x/crypto v0.48.0 // indirect
|
||||||
golang.org/x/term v0.29.0 // indirect
|
golang.org/x/oauth2 v0.34.0 // indirect
|
||||||
golang.org/x/text v0.22.0 // indirect
|
golang.org/x/sync v0.19.0 // indirect
|
||||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20240701130421-f6361c86f094 // indirect
|
golang.org/x/term v0.40.0 // indirect
|
||||||
|
golang.org/x/text v0.34.0 // indirect
|
||||||
|
google.golang.org/genproto/googleapis/rpc v0.0.0-20260128011058-8636f8732409 // indirect
|
||||||
|
gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
|
||||||
gopkg.in/inf.v0 v0.9.1 // indirect
|
gopkg.in/inf.v0 v0.9.1 // indirect
|
||||||
gopkg.in/yaml.v3 v3.0.1 // indirect
|
gopkg.in/yaml.v3 v3.0.1 // indirect
|
||||||
k8s.io/klog/v2 v2.110.1 // indirect
|
k8s.io/klog/v2 v2.130.1 // indirect
|
||||||
k8s.io/kube-openapi v0.0.0-20231010175941-2dd684a91f00 // indirect
|
k8s.io/kube-openapi v0.0.0-20250710124328-f3f2b991d03b // indirect
|
||||||
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect
|
sigs.k8s.io/json v0.0.0-20241014173422-cfa47c3a1cc8 // indirect
|
||||||
sigs.k8s.io/structured-merge-diff/v4 v4.4.1 // indirect
|
sigs.k8s.io/randfill v1.0.0 // indirect
|
||||||
sigs.k8s.io/yaml v1.3.0 // indirect
|
sigs.k8s.io/structured-merge-diff/v6 v6.3.0 // indirect
|
||||||
|
sigs.k8s.io/yaml v1.6.0 // indirect
|
||||||
)
|
)
|
||||||
|
|||||||
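The go.mod bumps above come in matched pairs with the go.sum entries that follow: for each module version, go.sum records one hash for the module zip (`h1:`) and one for its go.mod file (`/go.mod h1:`). A minimal sketch of how such entries are typically regenerated and checked with the standard Go toolchain (the module path and version here are taken from the diff above; run inside the module root):

```shell
# Bump one dependency to the version shown in the diff;
# the toolchain rewrites go.mod and appends the new go.sum hashes.
go get go.etcd.io/etcd/client/pkg/v3@v3.5.21

# Prune unused requirements and add missing indirect entries,
# which is what produces the large paired h1:/go.mod churn in go.sum.
go mod tidy

# Re-verify every downloaded module against its go.sum hash.
go mod verify
```

`go mod tidy` is also why unrelated-looking lines appear and disappear in the same commit: it rewrites the full indirect closure, not just the module that was bumped.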
342
go.sum
@@ -2,8 +2,8 @@ filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA=
 filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
 github.com/DATA-DOG/go-sqlmock v1.5.2 h1:OcvFkGmslmlZibjAjaHm3L//6LiuBgolP7OputlJIzU=
 github.com/DATA-DOG/go-sqlmock v1.5.2/go.mod h1:88MAG/4G7SMwSE3CeA0ZKzrT5CiOU3OJ+JlNzwDqpNU=
-github.com/alicebob/miniredis/v2 v2.35.0 h1:QwLphYqCEAo1eu1TqPRN2jgVMPBweeQcR21jeqDCONI=
-github.com/alicebob/miniredis/v2 v2.35.0/go.mod h1:TcL7YfarKPGDAthEtl5NBeHZfeUQj6OXMm/+iu5cLMM=
+github.com/alicebob/miniredis/v2 v2.37.0 h1:RheObYW32G1aiJIj81XVt78ZHJpHonHLHW7OLIshq68=
+github.com/alicebob/miniredis/v2 v2.37.0/go.mod h1:TcL7YfarKPGDAthEtl5NBeHZfeUQj6OXMm/+iu5cLMM=
 github.com/benbjohnson/clock v1.1.0 h1:Q92kusRqC1XV2MjkWETPvjJVqKetz1OzxZB7mHJLju8=
 github.com/benbjohnson/clock v1.1.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA=
 github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
@@ -14,12 +14,12 @@ github.com/bsm/gomega v1.27.10 h1:yeMWxP2pV2fG3FgAODIY8EiRE3dy0aeFYt4l7wh6yKA=
 github.com/bsm/gomega v1.27.10/go.mod h1:JyEr/xRbxbtgWNi8tIEVPUYZ5Dzef52k01W3YH0H+O0=
 github.com/bufbuild/protocompile v0.14.1 h1:iA73zAf/fyljNjQKwYzUHD6AD4R8KMasmwa/FBatYVw=
 github.com/bufbuild/protocompile v0.14.1/go.mod h1:ppVdAIhbr2H8asPk6k4pY7t9zB1OU5DoEw9xY/FUi1c=
-github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8=
-github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
+github.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1xcsSM=
+github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
 github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
 github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
-github.com/cncf/xds/go v0.0.0-20240423153145-555b57ec207b h1:ga8SEFjZ60pxLcmhnThWgvH2wg8376yUJmPhEH4H3kw=
-github.com/cncf/xds/go v0.0.0-20240423153145-555b57ec207b/go.mod h1:W+zGtBO5Y1IgJhy4+A9GOqVhqLpfZi+vwmdNXUehLA8=
+github.com/cncf/xds/go v0.0.0-20251210132809-ee656c7534f5 h1:6xNmx7iTtyBRev0+D/Tv1FZd4SCg8axKApyNyRsAt/w=
+github.com/cncf/xds/go v0.0.0-20251210132809-ee656c7534f5/go.mod h1:KdCmV+x/BuvyMxRnYBlmVaq4OLiKW6iRQfvC62cvdkI=
 github.com/coreos/go-semver v0.3.1 h1:yi21YpKnrx1gt5R+la8n5WgS0kCrsPp33dmEyHReZr4=
 github.com/coreos/go-semver v0.3.1/go.mod h1:irMmmIw/7yzSRPWryHsK7EYSg09caPQL03VsM8rvUec=
 github.com/coreos/go-systemd/v22 v22.5.0 h1:RrqgGjYQKalulkV8NGVIfkXQf6YYmOyiJKk8iXXhfZs=
@@ -30,72 +30,78 @@ github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
 github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
-github.com/emicklei/go-restful/v3 v3.11.0 h1:rAQeMHw1c7zTmncogyy8VvRZwtkmkZ4FxERmMY4rD+g=
-github.com/emicklei/go-restful/v3 v3.11.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
+github.com/emicklei/go-restful/v3 v3.12.2 h1:DhwDP0vY3k8ZzE0RunuJy8GhNpPL6zqLkDf9B/a0/xU=
+github.com/emicklei/go-restful/v3 v3.12.2/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
-github.com/envoyproxy/go-control-plane v0.12.0 h1:4X+VP1GHd1Mhj6IB5mMeGbLCleqxjletLK6K0rbxyZI=
-github.com/envoyproxy/go-control-plane v0.12.0/go.mod h1:ZBTaoJ23lqITozF0M6G4/IragXCQKCnYbmlmtHvwRG0=
+github.com/envoyproxy/go-control-plane/envoy v1.36.0 h1:yg/JjO5E7ubRyKX3m07GF3reDNEnfOboJ0QySbH736g=
+github.com/envoyproxy/go-control-plane/envoy v1.36.0/go.mod h1:ty89S1YCCVruQAm9OtKeEkQLTb+Lkz0k8v9W0Oxsv98=
-github.com/envoyproxy/protoc-gen-validate v1.0.4 h1:gVPz/FMfvh57HdSJQyvBtF00j8JU4zdyUgIUNhlgg0A=
-github.com/envoyproxy/protoc-gen-validate v1.0.4/go.mod h1:qys6tmnRsYrQqIhm2bvKZH4Blx/1gTIZ2UKVY1M+Yew=
+github.com/envoyproxy/protoc-gen-validate v1.3.0 h1:TvGH1wof4H33rezVKWSpqKz5NXWg5VPuZ0uONDT6eb4=
+github.com/envoyproxy/protoc-gen-validate v1.3.0/go.mod h1:HvYl7zwPa5mffgyeTUHA9zHIH36nmrm7oCbo4YKoSWA=
 github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM=
 github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU=
 github.com/fullstorydev/grpcurl v1.9.3 h1:PC1Xi3w+JAvEE2Tg2Gf2RfVgPbf9+tbuQr1ZkyVU3jk=
 github.com/fullstorydev/grpcurl v1.9.3/go.mod h1:/b4Wxe8bG6ndAjlfSUjwseQReUDUvBJiFEB7UllOlUE=
+github.com/fxamacker/cbor/v2 v2.9.0 h1:NpKPmjDBgUfBms6tr6JZkTHtfFGcMKsw3eGcmD/sapM=
+github.com/fxamacker/cbor/v2 v2.9.0/go.mod h1:vM4b+DJCtHn+zz7h3FFp/hDAI9WNWCsZj23V5ytsSxQ=
+github.com/go-jose/go-jose/v4 v4.1.3 h1:CVLmWDhDVRa6Mi/IgCgaopNosCaHz7zrMeF9MlZRkrs=
+github.com/go-jose/go-jose/v4 v4.1.3/go.mod h1:x4oUasVrzR7071A4TnHLGSPpNOm2a21K9Kf04k1rs08=
 github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
-github.com/go-logr/logr v1.3.0/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
-github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
-github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
+github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
+github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
 github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
 github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
-github.com/go-openapi/jsonpointer v0.19.6 h1:eCs3fxoIi3Wh6vtgmLTOjdhSpiqphQ+DaPn38N2ZdrE=
 github.com/go-openapi/jsonpointer v0.19.6/go.mod h1:osyAmYz/mB/C3I+WsTTSgw1ONzaLJoLCyoi6/zppojs=
+github.com/go-openapi/jsonpointer v0.21.0 h1:YgdVicSA9vH5RiHs9TZW5oyafXZFc6+2Vc1rr/O9oNQ=
+github.com/go-openapi/jsonpointer v0.21.0/go.mod h1:IUyH9l/+uyhIYQ/PXVA41Rexl+kOkAPDdXEYns6fzUY=
 github.com/go-openapi/jsonreference v0.20.2 h1:3sVjiK66+uXK/6oQ8xgcRKcFgQ5KXa2KvnJRumpMGbE=
 github.com/go-openapi/jsonreference v0.20.2/go.mod h1:Bl1zwGIM8/wsvqjsOQLJ/SH+En5Ap4rVB5KVcIDZG2k=
 github.com/go-openapi/swag v0.22.3/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14=
-github.com/go-openapi/swag v0.22.4 h1:QLMzNJnMGPRNDCbySlcj1x01tzU8/9LTTL9hZZZogBU=
-github.com/go-openapi/swag v0.22.4/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14=
+github.com/go-openapi/swag v0.23.0 h1:vsEVJDUo2hPJ2tu0/Xc+4noaxyEffXNIs3cOULZ+GrE=
+github.com/go-openapi/swag v0.23.0/go.mod h1:esZ8ITTYEsH1V2trKHjAN8Ai7xHb8RV+YSZ577vPjgQ=
-github.com/go-sql-driver/mysql v1.9.0 h1:Y0zIbQXhQKmQgTp44Y1dp3wTXcn804QoTptLZT1vtvo=
-github.com/go-sql-driver/mysql v1.9.0/go.mod h1:pDetrLJeA3oMujJuvXc8RJoasr589B6A9fwzD3QMrqw=
+github.com/go-sql-driver/mysql v1.9.3 h1:U/N249h2WzJ3Ukj8SowVFjdtZKfu9vlLZxjPXV1aweo=
+github.com/go-sql-driver/mysql v1.9.3/go.mod h1:qn46aNg1333BRMNU69Lq93t8du/dwxI64Gl8i5p1WMU=
 github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 h1:tfuBGBXKqDEevZMzYi5KSi8KkcZtzBcTgAUUtapy0OI=
-github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572/go.mod h1:9Pwr4B2jHnOSGXyyzV8ROjYa2ojvAY6HCGYYfMoC3Ls=
+github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI=
+github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8=
 github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
 github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
 github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
 github.com/golang-jwt/jwt/v4 v4.5.2 h1:YtQM7lnr8iZ+j5q71MGKkNw9Mn7AjHM68uc9g5fXeUI=
 github.com/golang-jwt/jwt/v4 v4.5.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
+github.com/golang-jwt/jwt/v5 v5.3.0 h1:pv4AsKCKKZuqlgs5sUmn4x8UlGa0kEVt/puTpKx9vvo=
+github.com/golang-jwt/jwt/v5 v5.3.0/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE=
 github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
 github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
-github.com/golang/snappy v1.0.0 h1:Oy607GVXHs7RtbggtPBnr2RmDArIsAefDwvrdWvRhGs=
-github.com/golang/snappy v1.0.0/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
-github.com/google/gnostic-models v0.6.8 h1:yo/ABAfM5IMRsS1VnXjTBvUb61tFIHozhlYvRgGre9I=
-github.com/google/gnostic-models v0.6.8/go.mod h1:5n7qKqH0f5wFt+aWF8CW6pZLLNOfYuF5OpfBSENuI8U=
-github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
-github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
-github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
+github.com/google/gnostic-models v0.7.0 h1:qwTtogB15McXDaNqTZdzPJRHvaVJlAl+HVQnLmJEJxo=
+github.com/google/gnostic-models v0.7.0/go.mod h1:whL5G0m6dmc5cPxKc5bdKdEN3UjI7OUGxBlw57miDrQ=
+github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
+github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
 github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
-github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
-github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
+github.com/google/jsonschema-go v0.4.2 h1:tmrUohrwoLZZS/P3x7ex0WAVknEkBZM46iALbcqoRA8=
+github.com/google/jsonschema-go v0.4.2/go.mod h1:r5quNTdLOYEz95Ru18zA0ydNbBuYoo9tgaYcxEYhJVE=
-github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1 h1:K6RDEckDVWvDI9JAJYCmNdQXq6neHJOYx3V6jnqNEec=
-github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
+github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db h1:097atOisP2aRj7vFgYQBbFN4U4JNXUNYpxael3UzMyo=
+github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db/go.mod h1:vavhavw2zAxS5dIdcRluK6cSGGPlZynqzFM8NdvU144=
 github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
 github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
-github.com/grafana/pyroscope-go v1.2.4 h1:B22GMXz+O0nWLatxLuaP7o7L9dvP0clLvIpmeEQQM0Q=
-github.com/grafana/pyroscope-go v1.2.4/go.mod h1:zzT9QXQAp2Iz2ZdS216UiV8y9uXJYQiGE1q8v1FyhqU=
+github.com/grafana/pyroscope-go v1.2.8 h1:UvCwIhlx9DeV7F6TW/z8q1Mi4PIm3vuUJ2ZlCEvmA4M=
+github.com/grafana/pyroscope-go v1.2.8/go.mod h1:SSi59eQ1/zmKoY/BKwa5rSFsJaq+242Bcrr4wPix1g8=
-github.com/grafana/pyroscope-go/godeltaprof v0.1.8 h1:iwOtYXeeVSAeYefJNaxDytgjKtUuKQbJqgAIjlnicKg=
-github.com/grafana/pyroscope-go/godeltaprof v0.1.8/go.mod h1:2+l7K7twW49Ct4wFluZD3tZ6e0SjanjcUUBPVD/UuGU=
+github.com/grafana/pyroscope-go/godeltaprof v0.1.9 h1:c1Us8i6eSmkW+Ez05d3co8kasnuOY813tbMN8i/a3Og=
+github.com/grafana/pyroscope-go/godeltaprof v0.1.9/go.mod h1:2+l7K7twW49Ct4wFluZD3tZ6e0SjanjcUUBPVD/UuGU=
-github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0 h1:bkypFPDjIYGfCYD5mRBvpqxfYX1YCS1PXdKYWi8FsN0=
-github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0/go.mod h1:P+Lt/0by1T8bfcF3z737NnSbmxQAppXMRziHUxPOC8k=
+github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.7 h1:X+2YciYSxvMQK0UZ7sg45ZVabVZBeBuvMkmuI2V3Fak=
+github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.7/go.mod h1:lW34nIZuQ8UDPdkon5fmfp2l3+ZkQ2me/+oecHYLOII=
 github.com/h2non/parth v0.0.0-20190131123155-b4df798d6542 h1:2VTzZjLZBgl62/EtslCrtky5vbi9dd7HrQPQIx6wqiw=
 github.com/h2non/parth v0.0.0-20190131123155-b4df798d6542/go.mod h1:Ow0tF8D4Kplbc8s8sSb3V2oUCygFHVp8gC3Dn6U4MNI=
 github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
 github.com/jackc/pgpassfile v1.0.0/go.mod h1:CEx0iS5ambNFdcRtxPj5JhEz+xB6uRky5eyVu/W2HEg=
 github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 h1:iCEnooe7UlwOQYpKFhBabPMi4aNAfoODPEFNiAnClxo=
 github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761/go.mod h1:5TJZWKEWniPve33vlWYSoGYefn3gLQRzjfDlhSJ9ZKM=
-github.com/jackc/pgx/v5 v5.7.4 h1:9wKznZrhWa2QiHL+NjTSPP6yjl3451BX3imWDnokYlg=
-github.com/jackc/pgx/v5 v5.7.4/go.mod h1:ncY89UGWxg82EykZUwSpUKEfccBGGYq1xjrOpsbsfGQ=
+github.com/jackc/pgx/v5 v5.8.0 h1:TYPDoleBBme0xGSAX3/+NujXXtpZn9HBONkQC7IEZSo=
+github.com/jackc/pgx/v5 v5.8.0/go.mod h1:QVeDInX2m9VyzvNeiCJVjCkNFqzsNb43204HshNSZKw=
 github.com/jackc/puddle/v2 v2.2.2 h1:PR8nw+E/1w0GLuRFSmiioY6UooMp6KJv0/61nB7icHo=
 github.com/jackc/puddle/v2 v2.2.2/go.mod h1:vriiEXHvEE654aYKXXjOvZM39qJ0q+azkZFrfEOc3H4=
-github.com/jhump/protoreflect v1.17.0 h1:qOEr613fac2lOuTgWN4tPAtLL7fUSbuJL5X5XumQh94=
-github.com/jhump/protoreflect v1.17.0/go.mod h1:h9+vUUL38jiBzck8ck+6G/aeMX8Z4QUY/NiJPwPNi+8=
+github.com/jhump/protoreflect v1.18.0 h1:TOz0MSR/0JOZ5kECB/0ufGnC2jdsgZ123Rd/k4Z5/2w=
+github.com/jhump/protoreflect v1.18.0/go.mod h1:ezWcltJIVF4zYdIFM+D/sHV4Oh5LNU08ORzCGfwvTz8=
+github.com/jhump/protoreflect/v2 v2.0.0-beta.1 h1:Dw1rslK/VotaUGYsv53XVWITr+5RCPXfvvlGrM/+B6w=
+github.com/jhump/protoreflect/v2 v2.0.0-beta.1/go.mod h1:D9LBEowZyv8/iSu97FU2zmXG3JxVTmNw21mu63niFzU=
 github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
 github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
 github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
@@ -103,8 +109,10 @@ github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHm
 github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
 github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
 github.com/kisielk/sqlstruct v0.0.0-20201105191214-5f3e10d3ab46/go.mod h1:yyMNCyc/Ib3bDTKd379tNMpB/7/H5TjM2Y9QJ5THLbE=
-github.com/klauspost/compress v1.17.11 h1:In6xLpyWOi1+C7tXUUWv2ot1QvBjxevKAaI6IXrJmUc=
-github.com/klauspost/compress v1.17.11/go.mod h1:pMDklpSncoRMuLFrf1W9Ss9KT+0rH90U12bZKk7uwG0=
+github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
+github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
+github.com/klauspost/cpuid/v2 v2.0.9 h1:lgaqFMSdTdQYdZ04uHyN2d/eKdOMyi2YLSvlQIBFYa4=
+github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
 github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
 github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
 github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
@@ -123,47 +131,62 @@ github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWE
 github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
 github.com/mattn/go-runewidth v0.0.15 h1:UNAjwbU9l54TA3KzvqLGxwWjHmMgBUVhBiTjelZgg3U=
 github.com/mattn/go-runewidth v0.0.15/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
+github.com/modelcontextprotocol/go-sdk v1.4.0 h1:u0kr8lbJc1oBcawK7Df+/ajNMpIDFE41OEPxdeTLOn8=
+github.com/modelcontextprotocol/go-sdk v1.4.0/go.mod h1:Nxc2n+n/GdCebUaqCOhTetptS17SXXNu9IfNTaLDi1E=
 github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
 github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
 github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
-github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
 github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
+github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee h1:W5t00kpgFdJifH4BDsTlE89Zl93FEloxaWZfGcifgq8=
+github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
 github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
 github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
 github.com/nbio/st v0.0.0-20140626010706-e9e8d9816f32 h1:W6apQkHrMkS0Muv8G/TipAy/FJl/rCYT0+EuS8+Z0z4=
 github.com/nbio/st v0.0.0-20140626010706-e9e8d9816f32/go.mod h1:9wM+0iRr9ahx58uYLpLIr5fm8diHn0JbqRycJi6w0Ms=
-github.com/onsi/ginkgo/v2 v2.13.0 h1:0jY9lJquiL8fcf3M4LAXN5aMlS/b2BV86HFFPCPMgE4=
-github.com/onsi/ginkgo/v2 v2.13.0/go.mod h1:TE309ZR8s5FsKKpuB1YAQYBzCaAfUgatB/xlT/ETL/o=
+github.com/onsi/ginkgo/v2 v2.21.0 h1:7rg/4f3rB88pb5obDgNZrNHrQ4e6WpjonchcpuBRnZM=
+github.com/onsi/ginkgo/v2 v2.21.0/go.mod h1:7Du3c42kxCUegi0IImZ1wUQzMBVecgIHjR1C+NkhLQo=
-github.com/onsi/gomega v1.29.0 h1:KIA/t2t5UBzoirT4H9tsML45GEbo3ouUnBHsCfD2tVg=
-github.com/onsi/gomega v1.29.0/go.mod h1:9sxs+SwGrKI0+PWe4Fxa9tFQQBG5xSsSbMXOI8PPpoQ=
+github.com/onsi/gomega v1.35.1 h1:Cwbd75ZBPxFSuZ6T+rN/WCb/gOc6YgFBXLlZLhC7Ds4=
+github.com/onsi/gomega v1.35.1/go.mod h1:PvZbdDc8J6XJEpDK4HCuRBm8a6Fzp9/DmhC9C7yFlog=
 github.com/openzipkin/zipkin-go v0.4.3 h1:9EGwpqkgnwdEIJ+Od7QVSEIH+ocmm5nPat0G7sjsSdg=
 github.com/openzipkin/zipkin-go v0.4.3/go.mod h1:M9wCJZFWCo2RiY+o1eBCEMe0Dp2S5LDHcMZmk3RmK7c=
-github.com/pelletier/go-toml/v2 v2.2.2 h1:aYUidT7k73Pcl9nb2gScu7NSrKCSHIDE89b3+6Wq+LM=
-github.com/pelletier/go-toml/v2 v2.2.2/go.mod h1:1t835xjRzz80PqgE6HHgN2JOsmgYu/h4qDAS4n929Rs=
+github.com/pelletier/go-toml/v2 v2.3.0 h1:k59bC/lIZREW0/iVaQR8nDHxVq8OVlIzYCOJf421CaM=
+github.com/pelletier/go-toml/v2 v2.3.0/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
+github.com/petermattis/goid v0.0.0-20260113132338-7c7de50cc741 h1:KPpdlQLZcHfTMQRi6bFQ7ogNO0ltFT4PmtwTLW4W+14=
+github.com/petermattis/goid v0.0.0-20260113132338-7c7de50cc741/go.mod h1:pxMtw7cyUw6B2bRH0ZBANSPg+AoSud1I1iyJHI69jH4=
 github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
 github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
+github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 h1:GFCKgmp0tecUJ0sJuv4pzYCqS9+RGSn52M3FUwPs+uo=
+github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10/go.mod h1:t/avpk3KcrXxUnYOhZhMXJlSEyie6gQbtLq5NM3loB8=
 github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
 github.com/prashantv/gostub v1.1.0 h1:BTyx3RfQjRHnUWaGF9oQos79AlQ5k8WNktv7VGvVH4g=
 github.com/prashantv/gostub v1.1.0/go.mod h1:A5zLQHz7ieHGG7is6LLXLz7I8+3LZzsrV0P1IAHhP5U=
-github.com/prometheus/client_golang v1.21.1 h1:DOvXXTqVzvkIewV/CDPFdejpMCGeMcbGCQ8YOmu+Ibk=
-github.com/prometheus/client_golang v1.21.1/go.mod h1:U9NM32ykUErtVBxdvD3zfi+EuFkkaBvMb09mIfe0Zgg=
+github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o=
+github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg=
-github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
-github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
+github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
+github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
-github.com/prometheus/common v0.62.0 h1:xasJaQlnWAeyHdUBeGjXmutelfJHWMRr+Fg4QszZ2Io=
-github.com/prometheus/common v0.62.0/go.mod h1:vyBcEuLSvWos9B1+CyL7JZ2up+uFzXhkqml0W5zIY1I=
+github.com/prometheus/common v0.66.1 h1:h5E0h5/Y8niHc5DlaLlWLArTQI7tMrsfQjHV+d9ZoGs=
+github.com/prometheus/common v0.66.1/go.mod h1:gcaUsgf3KfRSwHY4dIMXLPV0K/Wg1oZ8+SbZk/HH/dA=
-github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
-github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
+github.com/prometheus/procfs v0.16.1 h1:hZ15bTNuirocR6u0JZ6BAHHmwS1p8B4P6MRqxtzMyRg=
+github.com/prometheus/procfs v0.16.1/go.mod h1:teAbpZRB1iIAJYREa1LsoWUXykVXA1KlTmWl8x/U+Is=
-github.com/redis/go-redis/v9 v9.12.1 h1:k5iquqv27aBtnTm2tIkROUDp8JBXhXZIVu1InSgvovg=
-github.com/redis/go-redis/v9 v9.12.1/go.mod h1:huWgSWd8mW6+m0VPhJjSSQ+d6Nh1VICQ6Q5lHuCH/Iw=
+github.com/redis/go-redis/v9 v9.18.0 h1:pMkxYPkEbMPwRdenAzUNyFNrDgHx9U+DrBabWNfSRQs=
+github.com/redis/go-redis/v9 v9.18.0/go.mod h1:k3ufPphLU5YXwNTUcCRXGxUoF1fqxnhFQmscfkCoDA0=
 github.com/rivo/uniseg v0.2.0 h1:S1pD9weZBuJdFmowNwbpi7BJ8TNftyUImj/0WQi72jY=
 github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
-github.com/rogpeppe/go-internal v1.10.0 h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ=
-github.com/rogpeppe/go-internal v1.10.0/go.mod h1:UQnix2H7Ngw/k4C5ijL5+65zddjncjaFoBhdsK/akog=
+github.com/robertkrimen/otto v0.2.1 h1:FVP0PJ0AHIjC+N4pKCG9yCDz6LHNPCwi/GKID5pGGF0=
+github.com/robertkrimen/otto v0.2.1/go.mod h1:UPwtJ1Xu7JrLcZjNWN8orJaM5n5YEtqL//farB5FlRY=
+github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
+github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
+github.com/segmentio/asm v1.1.3 h1:WM03sfUOENvvKexOLp+pCqgb/WDjsi7EK8gIsICtzhc=
+github.com/segmentio/asm v1.1.3/go.mod h1:Ld3L4ZXGNcSLRg4JBsZ3//1+f/TjYl0Mzen/DQy1EJg=
+github.com/segmentio/encoding v0.5.3 h1:OjMgICtcSFuNvQCdwqMCv9Tg7lEOXGwm1J5RPQccx6w=
+github.com/segmentio/encoding v0.5.3/go.mod h1:HS1ZKa3kSN32ZHVZ7ZLPLXWvOVIiZtyJnO1gPH1sKt0=
 github.com/spaolacci/murmur3 v1.1.0 h1:7c1g84S4BPRrfL5Xrdp6fOJ206sU9y293DDHaoy0bLI=
 github.com/spaolacci/murmur3 v1.1.0/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
-github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
-github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
+github.com/spf13/pflag v1.0.6 h1:jFzHGLGAlb3ruxLB8MhbI6A8+AQX/2eW4qeyNZXNp2o=
+github.com/spf13/pflag v1.0.6/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
+github.com/spiffe/go-spiffe/v2 v2.6.0 h1:l+DolpxNWYgruGQVV0xsfeya3CsC7m8iBzDnMpsbLuo=
+github.com/spiffe/go-spiffe/v2 v2.6.0/go.mod h1:gm2SeUoMZEtpnzPNs2Csc0D/gX33k1xIx7lEzqblHEs=
 github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
 github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
 github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
@@ -174,16 +197,20 @@ github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/
 github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
 github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
 github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
-github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
-github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
-github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
-github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
+github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
+github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
+github.com/titanous/json5 v1.0.0 h1:hJf8Su1d9NuI/ffpxgxQfxh/UiBFZX7bMPid0rIL/7s=
+github.com/titanous/json5 v1.0.0/go.mod h1:7JH1M8/LHKc6cyP5o5g3CSaRj+mBrIimTxzpvmckH8c=
+github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM=
+github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg=
 github.com/xdg-go/pbkdf2 v1.0.0 h1:Su7DPu48wXMwC3bs7MCNG+z4FhcyEuz5dlvchbq0B0c=
 github.com/xdg-go/pbkdf2 v1.0.0/go.mod h1:jrpuAogTd400dnrH08LKmI/xc1MbPOebTwRqcT5RDeI=
-github.com/xdg-go/scram v1.1.2 h1:FHX5I5B4i4hKRVRBCFRxq1iQRej7WO3hhBuJf+UUySY=
-github.com/xdg-go/scram v1.1.2/go.mod h1:RT/sEzTbU5y00aCK8UOx6R7YryM0iF1N2MOmC3kKLN4=
+github.com/xdg-go/scram v1.2.0 h1:bYKF2AEwG5rqd1BumT4gAnvwU/M9nBp2pTSxeZw7Wvs=
+github.com/xdg-go/scram v1.2.0/go.mod h1:3dlrS0iBaWKYVt2ZfA4cj48umJZ+cAEbR6/SjLA88I8=
 github.com/xdg-go/stringprep v1.0.4 h1:XLI/Ng3O1Atzq0oBs3TWm+5ZVgkq2aqdlvP9JtoZ6c8=
 github.com/xdg-go/stringprep v1.0.4/go.mod h1:mPGuuIYwz7CmR2bT9j4GbQqutWS1zV24gijq1dTyGkM=
+github.com/yosida95/uritemplate/v3 v3.0.2 h1:Ed3Oyj9yrmi9087+NczuL5BwkIc4wvTb5zIM+UJPGz4=
+github.com/yosida95/uritemplate/v3 v3.0.2/go.mod h1:ILOh0sOhIJR3+L/8afwt/kE++YT040gmv5BQTMR2HP4=
 github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78 h1:ilQV1hzziu+LLM3zUTJ0trRztfwgjqKnBWNtSRkbmwM=
 github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78/go.mod h1:aL8wCCfTfSfmXjznFBSZNN13rSJjlIOI1fUNAtF7rmI=
 github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
@@ -191,54 +218,62 @@ github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9dec
 github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
 github.com/yuin/gopher-lua v1.1.1 h1:kYKnWBjvbNP4XLT3+bPEwAXJx262OhaHDWDVOPjL46M=
 github.com/yuin/gopher-lua v1.1.1/go.mod h1:GBR0iDaNXjAgGg9zfCvksxSRnQx76gclCIb7kdAd1Pw=
-go.etcd.io/etcd/api/v3 v3.5.15 h1:3KpLJir1ZEBrYuV2v+Twaa/e2MdDCEZ/70H+lzEiwsk=
-go.etcd.io/etcd/api/v3 v3.5.15/go.mod h1:N9EhGzXq58WuMllgH9ZvnEr7SI9pS0k0+DHZezGp7jM=
-go.etcd.io/etcd/client/pkg/v3 v3.5.15 h1:fo0HpWz/KlHGMCC+YejpiCmyWDEuIpnTDzpJLB5fWlA=
-go.etcd.io/etcd/client/pkg/v3 v3.5.15/go.mod h1:mXDI4NAOwEiszrHCb0aqfAYNCrZP4e9hRca3d1YK8EU=
-go.etcd.io/etcd/client/v3 v3.5.15 h1:23M0eY4Fd/inNv1ZfU3AxrbbOdW79r9V9Rl62Nm6ip4=
-go.etcd.io/etcd/client/v3 v3.5.15/go.mod h1:CLSJxrYjvLtHsrPKsy7LmZEE+DK2ktfd2bN4RhBMwlU=
-go.mongodb.org/mongo-driver/v2 v2.3.0 h1:sh55yOXA2vUjW1QYw/2tRlHSQViwDyPnW61AwpZ4rtU=
-go.mongodb.org/mongo-driver/v2 v2.3.0/go.mod h1:jHeEDJHJq7tm6ZF45Issun9dbogjfnPySb1vXA7EeAI=
-go.opentelemetry.io/otel v1.24.0 h1:0LAOdjNmQeSTzGBzduGe/rU4tZhMwL5rWgtp9Ku5Jfo=
-go.opentelemetry.io/otel v1.24.0/go.mod h1:W7b9Ozg4nkF5tWI5zsXkaKKDjdVjpD4oAt9Qi/MArHo=
-go.opentelemetry.io/otel/exporters/jaeger v1.17.0 h1:D7UpUy2Xc2wsi1Ras6V40q806WM07rqoCWzXu7Sqy+4=
-go.opentelemetry.io/otel/exporters/jaeger v1.17.0/go.mod h1:nPCqOnEH9rNLKqH/+rrUjiMzHJdV1BlpKcTwRTyKkKI=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.24.0 h1:t6wl9SPayj+c7lEIFgm4ooDBZVb01IhLB4InpomhRw8=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.24.0/go.mod h1:iSDOcsnSA5INXzZtwaBPrKp/lWu/V14Dd+llD0oI2EA=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.24.0 h1:Mw5xcxMwlqoJd97vwPxA8isEaIoxsta9/Q51+TTJLGE=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.24.0/go.mod h1:CQNu9bj7o7mC6U7+CA/schKEYakYXWr79ucDHTMGhCM=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.24.0 h1:Xw8U6u2f8DK2XAkGRFV7BBLENgnTGX9i4rQRxJf+/vs=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.24.0/go.mod h1:6KW1Fm6R/s6Z3PGXwSJN2K4eT6wQB3vXX6CVnYX9NmM=
-go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.24.0 h1:s0PHtIkN+3xrbDOpt2M8OTG92cWqUESvzh2MxiR5xY8=
-go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.24.0/go.mod h1:hZlFbDbRt++MMPCCfSJfmhkGIWnX1h3XjkfxZUjLrIA=
-go.opentelemetry.io/otel/exporters/zipkin v1.24.0 h1:3evrL5poBuh1KF51D9gO/S+N/1msnm4DaBqs/rpXUqY=
-go.opentelemetry.io/otel/exporters/zipkin v1.24.0/go.mod h1:0EHgD8R0+8yRhUYJOGR8Hfg2dpiJQxDOszd5smVO9wM=
-go.opentelemetry.io/otel/metric v1.24.0 h1:6EhoGWWK28x1fbpA4tYTOWBkPefTDQnb8WSGXlc88kI=
-go.opentelemetry.io/otel/metric v1.24.0/go.mod h1:VYhLe1rFfxuTXLgj4CBiyz+9WYBA8pNGJgDcSFRKBco=
-go.opentelemetry.io/otel/sdk v1.24.0 h1:YMPPDNymmQN3ZgczicBY3B6sf9n62Dlj9pWD3ucgoDw=
-go.opentelemetry.io/otel/sdk v1.24.0/go.mod h1:KVrIYw6tEubO9E96HQpcmpTKDVn9gdv35HoYiQWGDFg=
-go.opentelemetry.io/otel/trace v1.24.0 h1:CsKnnL4dUAr/0llH9FKuc698G04IrpWV0MQA/Y1YELI=
-go.opentelemetry.io/otel/trace v1.24.0/go.mod h1:HPc3Xr/cOApsBI154IU0OI0HJexz+aw5uPdbs3UCjNU=
-go.opentelemetry.io/proto/otlp v1.3.1 h1:TrMUixzpM0yuc/znrFTP9MMRh8trP93mkCiDVeXrui0=
-go.opentelemetry.io/proto/otlp v1.3.1/go.mod h1:0X1WI4de4ZsLrrJNLAQbFeLCm3T7yBkR0XqQ7niQU+8=
-go.uber.org/atomic v1.10.0 h1:9qC72Qh0+3MqyJbAn8YU5xVq1frD8bn3JtD2oXtafVQ=
-go.uber.org/atomic v1.10.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0=
+github.com/zeebo/xxh3 v1.0.2 h1:xZmwmqxHZA8AI603jOQ0tMqmBr9lPeFwGg6d+xy9DC0=
+github.com/zeebo/xxh3 v1.0.2/go.mod h1:5NWz9Sef7zIDm2JHfFlcQvNekmcEl9ekUZQQKCYaDcA=
+go.etcd.io/etcd/api/v3 v3.5.21 h1:A6O2/JDb3tvHhiIz3xf9nJ7REHvtEFJJ3veW3FbCnS8=
+go.etcd.io/etcd/api/v3 v3.5.21/go.mod h1:c3aH5wcvXv/9dqIw2Y810LDXJfhSYdHQ0vxmP3CCHVY=
+go.etcd.io/etcd/client/pkg/v3 v3.5.21 h1:lPBu71Y7osQmzlflM9OfeIV2JlmpBjqBNlLtcoBqUTc=
+go.etcd.io/etcd/client/pkg/v3 v3.5.21/go.mod h1:BgqT/IXPjK9NkeSDjbzwsHySX3yIle2+ndz28nVsjUs=
+go.etcd.io/etcd/client/v3 v3.5.21 h1:T6b1Ow6fNjOLOtM0xSoKNQt1ASPCLWrF9XMHcH9pEyY=
+go.etcd.io/etcd/client/v3 v3.5.21/go.mod h1:mFYy67IOqmbRf/kRUvsHixzo3iG+1OF2W2+jVIQRAnU=
+go.mongodb.org/mongo-driver/v2 v2.6.0 h1:b9sJOYrkmt4l8bY43ZenFBcPlhYIjaOfYHLtbB/5qi8=
+go.mongodb.org/mongo-driver/v2 v2.6.0/go.mod h1:yOI9kBsufol30iFsl1slpdq1I0eHPzybRWdyYUs8K/0=
+go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
+go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
+go.opentelemetry.io/otel v1.40.0 h1:oA5YeOcpRTXq6NN7frwmwFR0Cn3RhTVZvXsP4duvCms=
+go.opentelemetry.io/otel v1.40.0/go.mod h1:IMb+uXZUKkMXdPddhwAHm6UfOwJyh4ct1ybIlV14J0g=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.40.0 h1:QKdN8ly8zEMrByybbQgv8cWBcdAarwmIPZ6FThrWXJs=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.40.0/go.mod h1:bTdK1nhqF76qiPoCCdyFIV+N/sRHYXYCTQc+3VCi3MI=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.40.0 h1:DvJDOPmSWQHWywQS6lKL+pb8s3gBLOZUtw4N+mavW1I=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.40.0/go.mod h1:EtekO9DEJb4/jRyN4v4Qjc2yA7AtfCBuz2FynRUWTXs=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.40.0 h1:wVZXIWjQSeSmMoxF74LzAnpVQOAFDo3pPji9Y4SOFKc=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.40.0/go.mod h1:khvBS2IggMFNwZK/6lEeHg/W57h/IX6J4URh57fuI40=
+go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.40.0 h1:MzfofMZN8ulNqobCmCAVbqVL5syHw+eB2qPRkCMA/fQ=
+go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.40.0/go.mod h1:E73G9UFtKRXrxhBsHtG00TB5WxX57lpsQzogDkqBTz8=
+go.opentelemetry.io/otel/exporters/zipkin v1.40.0 h1:zu+I4j+FdO6xIxBVPeuncQVbjxUM4LiMgv6GwGe9REE=
+go.opentelemetry.io/otel/exporters/zipkin v1.40.0/go.mod h1:zS6cC4nFBYXbu18e7aLfMzubBjOiN7ZcROu477qtMf8=
+go.opentelemetry.io/otel/metric v1.40.0 h1:rcZe317KPftE2rstWIBitCdVp89A2HqjkxR3c11+p9g=
+go.opentelemetry.io/otel/metric v1.40.0/go.mod h1:ib/crwQH7N3r5kfiBZQbwrTge743UDc7DTFVZrrXnqc=
+go.opentelemetry.io/otel/sdk v1.40.0 h1:KHW/jUzgo6wsPh9At46+h4upjtccTmuZCFAc9OJ71f8=
+go.opentelemetry.io/otel/sdk v1.40.0/go.mod h1:Ph7EFdYvxq72Y8Li9q8KebuYUr2KoeyHx0DRMKrYBUE=
+go.opentelemetry.io/otel/sdk/metric v1.40.0 h1:mtmdVqgQkeRxHgRv4qhyJduP3fYJRMX4AtAlbuWdCYw=
+go.opentelemetry.io/otel/sdk/metric v1.40.0/go.mod h1:4Z2bGMf0KSK3uRjlczMOeMhKU2rhUqdWNoKcYrtcBPg=
+go.opentelemetry.io/otel/trace v1.40.0 h1:WA4etStDttCSYuhwvEa8OP8I5EWu24lkOzp+ZYblVjw=
+go.opentelemetry.io/otel/trace v1.40.0/go.mod h1:zeAhriXecNGP/s2SEG3+Y8X9ujcJOTqQ5RgdEJcawiA=
+go.opentelemetry.io/proto/otlp v1.9.0 h1:l706jCMITVouPOqEnii2fIAuO3IVGBRPV5ICjceRb/A=
+go.opentelemetry.io/proto/otlp v1.9.0/go.mod h1:xE+Cx5E/eEHw+ISFkwPLwCZefwVjY+pqKg1qcK03+/4=
+go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE=
+go.uber.org/atomic v1.11.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0=
 go.uber.org/automaxprocs v1.6.0 h1:O3y2/QNTOdbF+e/dpXNNW7Rx2hZ4sTIPyybbxyNqTUs=
 go.uber.org/automaxprocs v1.6.0/go.mod h1:ifeIMSnPZuznNm6jmdzmU3/bfk01Fe2fotchwEFJ8r8=
 go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
 go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
-go.uber.org/mock v0.4.0 h1:VcM4ZOtdbR4f6VXfiOpwpVJDL6lCReaZ6mw31wqh7KU=
-go.uber.org/mock v0.4.0/go.mod h1:a6FSlNadKUHUa9IP5Vyt1zh4fC7uAwxMutEAscFbkZc=
+go.uber.org/mock v0.6.0 h1:hyF9dfmbgIX5EfOdasqLsWD6xqpNZlXblLB/Dbnwv3Y=
+go.uber.org/mock v0.6.0/go.mod h1:KiVJ4BqZJaMj4svdfmHM0AUx4NJYO8ZNpPnZn1Z+BBU=
 go.uber.org/multierr v1.9.0 h1:7fIwc/ZtS0q++VgcfqFDxSBZVv/Xo49/SYnDFupUwlI=
 go.uber.org/multierr v1.9.0/go.mod h1:X2jQV1h+kxSjClGpnseKVIxpmcjrj7MNnI0bnlfKTVQ=
 go.uber.org/zap v1.24.0 h1:FiJd5l1UOLj0wCgbSE0rwwXHzEdAZS6hiiSnxJN/D60=
 go.uber.org/zap v1.24.0/go.mod h1:2kMP+WWQ8aoFoedH3T2sq6iJ2yDWpHbP0f6MQbS9Gkg=
+go.yaml.in/yaml/v2 v2.4.2 h1:DzmwEr2rDGHl7lsFgAHxmNz/1NlQ7xLIrlN2h5d1eGI=
+go.yaml.in/yaml/v2 v2.4.2/go.mod h1:081UH+NErpNdqlCXm3TtEran0rJZGxAYx9hb/ELlsPU=
+go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
+go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
 golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
 golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
 golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
 golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
-golang.org/x/crypto v0.33.0 h1:IOBPskki6Lysi0lo9qQvbxiQ+FvsCC/YWOecCHAixus=
-golang.org/x/crypto v0.33.0/go.mod h1:bVdXmD7IV/4GdElGPozy6U7lWdRXA4qyRVGJV57uQ5M=
+golang.org/x/crypto v0.48.0 h1:/VRzVqiRSggnhY7gNRxPauEQ5Drw9haKdM0jqfcCFts=
+golang.org/x/crypto v0.48.0/go.mod h1:r0kV5h3qnFPlQnBSrULhlsRfryS2pmewsg+XfMgkVos=
 golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
 golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
 golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
@@ -248,16 +283,16 @@ golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLL
 golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
 golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
 golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
-golang.org/x/net v0.35.0 h1:T5GQRQb2y08kTAByq9L4/bz8cipCdA8FbRTXewonqY8=
-golang.org/x/net v0.35.0/go.mod h1:EglIi67kWsHKlRzzVMUD93VMSWGFOMSZgxFjparz1Qk=
-golang.org/x/oauth2 v0.24.0 h1:KTBBxWqUa0ykRPLtV69rRto9TLXcqYkeswu48x/gvNE=
-golang.org/x/oauth2 v0.24.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
+golang.org/x/net v0.50.0 h1:ucWh9eiCGyDR3vtzso0WMQinm2Dnt8cFMuQa9K33J60=
+golang.org/x/net v0.50.0/go.mod h1:UgoSli3F/pBgdJBHCTc+tp3gmrU4XswgGRgtnwWTfyM=
+golang.org/x/oauth2 v0.34.0 h1:hqK/t4AKgbqWkdkcAeI8XLmbK+4m4G5YeQRrmiotGlw=
+golang.org/x/oauth2 v0.34.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
 golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
-golang.org/x/sync v0.11.0 h1:GGz8+XQP4FvTTrjZPzNKTMFtSXH80RAzG+5ghFPgK9w=
-golang.org/x/sync v0.11.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
+golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
+golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
 golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -267,69 +302,76 @@ golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBc
|
|||||||
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||||
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||||
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||||
golang.org/x/sys v0.30.0 h1:QjkSwP/36a20jFYWkSue1YwXzLmsV5Gfq7Eiy72C1uc=
|
golang.org/x/sys v0.41.0 h1:Ivj+2Cp/ylzLiEU89QhWblYnOE9zerudt9Ftecq2C6k=
|
||||||
golang.org/x/sys v0.30.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
|
golang.org/x/sys v0.41.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
|
||||||
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
|
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
|
||||||
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
|
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
|
||||||
golang.org/x/term v0.29.0 h1:L6pJp37ocefwRRtYPKSWOWzOtWSxVajvz2ldH/xi3iU=
|
golang.org/x/term v0.40.0 h1:36e4zGLqU4yhjlmxEaagx2KuYbJq3EwY8K943ZsHcvg=
|
||||||
golang.org/x/term v0.29.0/go.mod h1:6bl4lRlvVuDgSf3179VpIxBF0o10JUpXWOnI7nErv7s=
|
golang.org/x/term v0.40.0/go.mod h1:w2P8uVp06p2iyKKuvXIm7N/y0UCRt3UfJTfZ7oOpglM=
|
||||||
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||||
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
||||||
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
|
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
|
||||||
golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ=
|
golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ=
|
||||||
golang.org/x/text v0.22.0 h1:bofq7m3/HAFvbF51jz3Q9wLg3jkvSPuiZu/pD1XwgtM=
|
golang.org/x/text v0.34.0 h1:oL/Qq0Kdaqxa1KbNeMKwQq0reLCCaFtqu2eNuSeNHbk=
|
||||||
golang.org/x/text v0.22.0/go.mod h1:YRoo4H8PVmsu+E3Ou7cqLVH8oXWIHVoX0jqUWALQhfY=
|
golang.org/x/text v0.34.0/go.mod h1:homfLqTYRFyVYemLBFl5GgL/DWEiH5wcsQ5gSh1yziA=
|
||||||
golang.org/x/time v0.10.0 h1:3usCWA8tQn0L8+hFJQNgzpWbd89begxN66o1Ojdn5L4=
|
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
|
||||||
golang.org/x/time v0.10.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
|
golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
|
||||||
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||||
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||||
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
|
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
|
||||||
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
|
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
|
||||||
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
|
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
|
||||||
golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d h1:vU5i/LfpvrRCpgM/VPfJLg5KjxD3E+hfT1SH+d9zLwg=
|
golang.org/x/tools v0.41.0 h1:a9b8iMweWG+S0OBnlU36rzLp20z1Rp10w+IY2czHTQc=
|
||||||
golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d/go.mod h1:aiJjzUbINMkxbQROHiO6hDPo2LHcIPhhQsa9DLh0yGk=
|
golang.org/x/tools v0.41.0/go.mod h1:XSY6eDqxVNiYgezAVqqCeihT4j1U2CCsqvH3WhQpnlg=
|
||||||
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||||
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||||
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||||
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||||
google.golang.org/genproto/googleapis/api v0.0.0-20240711142825-46eb208f015d h1:kHjw/5UfflP/L5EbledDrcG4C2597RtymmGRZvHiCuY=
|
gonum.org/v1/gonum v0.17.0 h1:VbpOemQlsSMrYmn7T2OUvQ4dqxQXU+ouZFQsZOx50z4=
|
||||||
google.golang.org/genproto/googleapis/api v0.0.0-20240711142825-46eb208f015d/go.mod h1:mw8MG/Qz5wfgYr6VqVCiZcHe/GJEfI+oGGDCohaVgB0=
|
gonum.org/v1/gonum v0.17.0/go.mod h1:El3tOrEuMpv2UdMrbNlKEh9vd86bmQ6vqIcDwxEOc1E=
|
||||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20240701130421-f6361c86f094 h1:BwIjyKYGsK9dMCBOorzRri8MQwmi7mT9rGHsCEinZkA=
|
google.golang.org/genproto/googleapis/api v0.0.0-20260128011058-8636f8732409 h1:merA0rdPeUV3YIIfHHcH4qBkiQAc1nfCKSI7lB4cV2M=
|
||||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20240701130421-f6361c86f094/go.mod h1:Ue6ibwXGpU+dqIcODieyLOcgj7z8+IcskoNIgZxtrFY=
|
google.golang.org/genproto/googleapis/api v0.0.0-20260128011058-8636f8732409/go.mod h1:fl8J1IvUjCilwZzQowmw2b7HQB2eAuYBabMXzWurF+I=
|
||||||
google.golang.org/grpc v1.65.0 h1:bs/cUb4lp1G5iImFFd3u5ixQzweKizoZJAwBNLR42lc=
|
google.golang.org/genproto/googleapis/rpc v0.0.0-20260128011058-8636f8732409 h1:H86B94AW+VfJWDqFeEbBPhEtHzJwJfTbgE2lZa54ZAQ=
|
||||||
google.golang.org/grpc v1.65.0/go.mod h1:WgYC2ypjlB0EiQi6wdKixMqukr6lBc0Vo+oOgjrM5ZQ=
|
google.golang.org/genproto/googleapis/rpc v0.0.0-20260128011058-8636f8732409/go.mod h1:j9x/tPzZkyxcgEFkiKEEGxfvyumM01BEtsW8xzOahRQ=
|
||||||
google.golang.org/protobuf v1.36.5 h1:tPhr+woSbjfYvY6/GPufUoYizxw1cF/yFoxJ2fmpwlM=
|
google.golang.org/grpc v1.80.0 h1:Xr6m2WmWZLETvUNvIUmeD5OAagMw3FiKmMlTdViWsHM=
|
||||||
google.golang.org/protobuf v1.36.5/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
|
google.golang.org/grpc v1.80.0/go.mod h1:ho/dLnxwi3EDJA4Zghp7k2Ec1+c2jqup0bFkw07bwF4=
|
||||||
|
google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
|
||||||
|
google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
|
||||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
|
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
|
||||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
|
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
|
||||||
gopkg.in/cheggaaa/pb.v1 v1.0.28 h1:n1tBJnnK2r7g9OW2btFH91V92STTUevLXYFb8gy9EMk=
|
gopkg.in/cheggaaa/pb.v1 v1.0.28 h1:n1tBJnnK2r7g9OW2btFH91V92STTUevLXYFb8gy9EMk=
|
||||||
gopkg.in/cheggaaa/pb.v1 v1.0.28/go.mod h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qStrOgw=
|
gopkg.in/cheggaaa/pb.v1 v1.0.28/go.mod h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qStrOgw=
|
||||||
|
gopkg.in/evanphx/json-patch.v4 v4.12.0 h1:n6jtcsulIzXPJaxegRbvFNNrZDjbij7ny3gmSPG+6V4=
|
||||||
|
gopkg.in/evanphx/json-patch.v4 v4.12.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M=
|
||||||
gopkg.in/h2non/gock.v1 v1.1.2 h1:jBbHXgGBK/AoPVfJh5x4r/WxIrElvbLel8TCZkkZJoY=
|
gopkg.in/h2non/gock.v1 v1.1.2 h1:jBbHXgGBK/AoPVfJh5x4r/WxIrElvbLel8TCZkkZJoY=
|
||||||
gopkg.in/h2non/gock.v1 v1.1.2/go.mod h1:n7UGz/ckNChHiK05rDoiC4MYSunEC/lyaUm2WWaDva0=
|
gopkg.in/h2non/gock.v1 v1.1.2/go.mod h1:n7UGz/ckNChHiK05rDoiC4MYSunEC/lyaUm2WWaDva0=
|
||||||
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
|
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
|
||||||
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
|
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
|
||||||
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
gopkg.in/sourcemap.v1 v1.0.5 h1:inv58fC9f9J3TK2Y2R1NPntXEn3/wjWHkonhIUODNTI=
|
||||||
|
gopkg.in/sourcemap.v1 v1.0.5/go.mod h1:2RlvNNSMglmRrcvhfuzp4hQHwOtjxlbjX7UPY/GXb78=
|
||||||
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
|
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
|
||||||
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
|
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
|
||||||
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||||
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
|
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
|
||||||
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||||
k8s.io/api v0.29.3 h1:2ORfZ7+bGC3YJqGpV0KSDDEVf8hdGQ6A03/50vj8pmw=
|
k8s.io/api v0.34.3 h1:D12sTP257/jSH2vHV2EDYrb16bS7ULlHpdNdNhEw2S4=
|
||||||
k8s.io/api v0.29.3/go.mod h1:y2yg2NTyHUUkIoTC+phinTnEa3KFM6RZ3szxt014a80=
|
k8s.io/api v0.34.3/go.mod h1:PyVQBF886Q5RSQZOim7DybQjAbVs8g7gwJNhGtY5MBk=
|
||||||
k8s.io/apimachinery v0.29.4 h1:RaFdJiDmuKs/8cm1M6Dh1Kvyh59YQFDcFuFTSmXes6Q=
|
k8s.io/apimachinery v0.34.3 h1:/TB+SFEiQvN9HPldtlWOTp0hWbJ+fjU+wkxysf/aQnE=
|
||||||
-k8s.io/apimachinery v0.29.4/go.mod h1:i3FJVwhvSp/6n8Fl4K97PJEP8C+MM+aoDq4+ZJBf70Y=
+k8s.io/apimachinery v0.34.3/go.mod h1:/GwIlEcWuTX9zKIg2mbw0LRFIsXwrfoVxn+ef0X13lw=
-k8s.io/client-go v0.29.3 h1:R/zaZbEAxqComZ9FHeQwOh3Y1ZUs7FaHKZdQtIc2WZg=
+k8s.io/client-go v0.34.3 h1:wtYtpzy/OPNYf7WyNBTj3iUA0XaBHVqhv4Iv3tbrF5A=
-k8s.io/client-go v0.29.3/go.mod h1:tkDisCvgPfiRpxGnOORfkljmS+UrW+WtXAy2fTvXJB0=
+k8s.io/client-go v0.34.3/go.mod h1:OxxeYagaP9Kdf78UrKLa3YZixMCfP6bgPwPwNBQBzpM=
-k8s.io/klog/v2 v2.110.1 h1:U/Af64HJf7FcwMcXyKm2RPM22WZzyR7OSpYj5tg3cL0=
+k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk=
-k8s.io/klog/v2 v2.110.1/go.mod h1:YGtd1984u+GgbuZ7e08/yBuAfKLSO0+uR1Fhi6ExXjo=
+k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE=
-k8s.io/kube-openapi v0.0.0-20231010175941-2dd684a91f00 h1:aVUu9fTY98ivBPKR9Y5w/AuzbMm96cd3YHRTU83I780=
+k8s.io/kube-openapi v0.0.0-20250710124328-f3f2b991d03b h1:MloQ9/bdJyIu9lb1PzujOPolHyvO06MXG5TUIj2mNAA=
-k8s.io/kube-openapi v0.0.0-20231010175941-2dd684a91f00/go.mod h1:AsvuZPBlUDVuCdzJ87iajxtXuR9oktsTctW/R9wwouA=
+k8s.io/kube-openapi v0.0.0-20250710124328-f3f2b991d03b/go.mod h1:UZ2yyWbFTpuhSbFhv24aGNOdoRdJZgsIObGBUaYVsts=
-k8s.io/utils v0.0.0-20240711033017-18e509b52bc8 h1:pUdcCO1Lk/tbT5ztQWOBi5HBgbBP1J8+AsQnQCKsi8A=
+k8s.io/utils v0.0.0-20260319190234-28399d86e0b5 h1:kBawHLSnx/mYHmRnNUf9d4CpjREbeZuxoSGOX/J+aYM=
-k8s.io/utils v0.0.0-20240711033017-18e509b52bc8/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
+k8s.io/utils v0.0.0-20260319190234-28399d86e0b5/go.mod h1:xDxuJ0whA3d0I4mf/C4ppKHxXynQ+fxnkmQH0vTHnuk=
-sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd h1:EDPBXCAspyGV4jQlpZSudPeMmr1bNJefnuqLsRAsHZo=
+sigs.k8s.io/json v0.0.0-20241014173422-cfa47c3a1cc8 h1:gBQPwqORJ8d8/YNZWEjoZs7npUVDpVXUUOFfW6CgAqE=
-sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0=
+sigs.k8s.io/json v0.0.0-20241014173422-cfa47c3a1cc8/go.mod h1:mdzfpAEoE6DHQEN0uh9ZbOCuHbLK5wOm7dK4ctXE9Tg=
-sigs.k8s.io/structured-merge-diff/v4 v4.4.1 h1:150L+0vs/8DA78h1u02ooW1/fFq/Lwr+sGiqlzvrtq4=
-sigs.k8s.io/structured-merge-diff/v4 v4.4.1/go.mod h1:N8hJocpFajUSSeSJ9bOZ77VzejKZaXsTtZo4/u7Io08=
-sigs.k8s.io/yaml v1.3.0 h1:a2VclLzOGrwOHDiV8EfBGhvjHvP46CtW5j6POvhYGGo=
-sigs.k8s.io/yaml v1.3.0/go.mod h1:GeOyir5tyXNByN85N/dRIT9es5UQNerPYEKK56eTBm8=
+sigs.k8s.io/randfill v1.0.0 h1:JfjMILfT8A6RbawdsK2JXGBR5AQVfd+9TbzrlneTyrU=
+sigs.k8s.io/randfill v1.0.0/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY=
+sigs.k8s.io/structured-merge-diff/v6 v6.3.0 h1:jTijUJbW353oVOd9oTlifJqOGEkUw2jB/fXCbTiQEco=
+sigs.k8s.io/structured-merge-diff/v6 v6.3.0/go.mod h1:M3W8sfWvn2HhQDIbGWj3S099YozAsymCo/wrT5ohRUE=
+sigs.k8s.io/yaml v1.6.0 h1:G8fkbMSAFqgEFgh4b1wmtzDnioxFCUgTZhlbj5P9QYs=
+sigs.k8s.io/yaml v1.6.0/go.mod h1:796bPqUfzR/0jLAl6XjHl3Ck7MiyVv8dbTdyT3/pMf4=
@@ -3,12 +3,63 @@ package encoding

 import (
 	"bytes"
 	"encoding/json"
+	"fmt"
+	"math"
+
 	"github.com/pelletier/go-toml/v2"
+	"github.com/titanous/json5"
 	"github.com/zeromicro/go-zero/core/lang"
 	"gopkg.in/yaml.v2"
 )

+// Json5ToJson converts JSON5 data into its JSON representation.
+func Json5ToJson(data []byte) ([]byte, error) {
+	var val any
+	if err := json5.Unmarshal(data, &val); err != nil {
+		return nil, err
+	}
+
+	// Validate that there are no unsupported values like Infinity or NaN
+	if err := validateJSONCompatible(val); err != nil {
+		return nil, err
+	}
+
+	return encodeToJSON(val)
+}
+
+// validateJSONCompatible checks if the value can be represented in standard JSON.
+// JSON5 allows Infinity and NaN, but standard JSON does not support these values.
+func validateJSONCompatible(val any) error {
+	switch v := val.(type) {
+	case float64:
+		if math.IsInf(v, 0) {
+			return fmt.Errorf("JSON5 value Infinity cannot be represented in standard JSON")
+		}
+		if math.IsNaN(v) {
+			return fmt.Errorf("JSON5 value NaN cannot be represented in standard JSON")
+		}
+	case []any:
+		for _, item := range v {
+			if err := validateJSONCompatible(item); err != nil {
+				return err
+			}
+		}
+	case map[string]any:
+		for _, value := range v {
+			if err := validateJSONCompatible(value); err != nil {
+				return err
+			}
+		}
+	case map[any]any:
+		for _, value := range v {
+			if err := validateJSONCompatible(value); err != nil {
+				return err
+			}
+		}
+	}
+	return nil
+}
+
 // TomlToJson converts TOML data into its JSON representation.
 func TomlToJson(data []byte) ([]byte, error) {
 	var val any
@@ -1,6 +1,7 @@
 package encoding

 import (
+	"math"
 	"testing"

 	"github.com/stretchr/testify/assert"
@@ -116,3 +117,142 @@ func TestYamlToJsonSlice(t *testing.T) {
 	assert.Equal(t, `{"foo":["bar","baz"]}
 `, string(b))
 }
+
+func TestJson5ToJson(t *testing.T) {
+	tests := []struct {
+		name   string
+		input  string
+		expect string
+	}{
+		{
+			name:   "standard json",
+			input:  `{"a":"foo","b":1,"c":"${FOO}","d":"abcd!@#$112"}`,
+			expect: "{\"a\":\"foo\",\"b\":1,\"c\":\"${FOO}\",\"d\":\"abcd!@#$112\"}\n",
+		},
+		{
+			name:   "json5 with comments",
+			input:  `{/*comment*/"a":"foo","b":1}`,
+			expect: "{\"a\":\"foo\",\"b\":1}\n",
+		},
+		{
+			name:   "json5 with trailing commas",
+			input:  `{"a":"foo","b":1,}`,
+			expect: "{\"a\":\"foo\",\"b\":1}\n",
+		},
+		{
+			name:   "json5 with unquoted keys",
+			input:  `{a:"foo",b:1}`,
+			expect: "{\"a\":\"foo\",\"b\":1}\n",
+		},
+		{
+			name:   "json5 with single quotes",
+			input:  `{"a":'foo',"b":1}`,
+			expect: "{\"a\":\"foo\",\"b\":1}\n",
+		},
+		{
+			name:   "json5 with line comments",
+			input:  "{\n// This is a comment\n\"a\":\"foo\",\n\"b\":1\n}",
+			expect: "{\"a\":\"foo\",\"b\":1}\n",
+		},
+		{
+			name:   "json5 all features combined",
+			input:  "{\n// comment\na: 'foo', // trailing comma\nb: 1,\n}",
+			expect: "{\"a\":\"foo\",\"b\":1}\n",
+		},
+	}
+
+	for _, test := range tests {
+		test := test
+		t.Run(test.name, func(t *testing.T) {
+			t.Parallel()
+			got, err := Json5ToJson([]byte(test.input))
+			assert.NoError(t, err)
+			assert.Equal(t, test.expect, string(got))
+		})
+	}
+}
+
+func TestJson5ToJsonError(t *testing.T) {
+	// Invalid JSON5: unquoted string value
+	_, err := Json5ToJson([]byte("{a: foo}"))
+	assert.Error(t, err)
+}
+
+func TestJson5ToJsonInfinity(t *testing.T) {
+	// JSON5 allows Infinity but standard JSON does not
+	_, err := Json5ToJson([]byte(`{value: Infinity}`))
+	assert.Error(t, err)
+	assert.Contains(t, err.Error(), "Infinity")
+
+	// Negative infinity
+	_, err = Json5ToJson([]byte(`{value: -Infinity}`))
+	assert.Error(t, err)
+	assert.Contains(t, err.Error(), "Infinity")
+
+	// Infinity in array
+	_, err = Json5ToJson([]byte(`{values: [1, Infinity, 3]}`))
+	assert.Error(t, err)
+	assert.Contains(t, err.Error(), "Infinity")
+}
+
+func TestJson5ToJsonNaN(t *testing.T) {
+	// JSON5 allows NaN but standard JSON does not
+	_, err := Json5ToJson([]byte(`{value: NaN}`))
+	assert.Error(t, err)
+	assert.Contains(t, err.Error(), "NaN")
+
+	// NaN in nested structure
+	_, err = Json5ToJson([]byte(`{nested: {value: NaN}}`))
+	assert.Error(t, err)
+	assert.Contains(t, err.Error(), "NaN")
+}
+
+func TestJson5ToJsonSlice(t *testing.T) {
+	b, err := Json5ToJson([]byte(`{
+		// comment
+		foo: [
+			'bar',
+			"baz", // trailing comma
+		],
+	}`))
+	assert.NoError(t, err)
+	assert.Equal(t, `{"foo":["bar","baz"]}
+`, string(b))
+}
+
+func TestValidateJSONCompatible(t *testing.T) {
+	// Test float64 types
+	assert.NoError(t, validateJSONCompatible(float64(1.5)))
+	assert.Error(t, validateJSONCompatible(math.Inf(1)))
+	assert.Error(t, validateJSONCompatible(math.Inf(-1)))
+	assert.Error(t, validateJSONCompatible(math.NaN()))
+
+	// Test arrays with invalid values
+	assert.Error(t, validateJSONCompatible([]any{1, math.Inf(1), 3}))
+	assert.Error(t, validateJSONCompatible([]any{1, math.NaN(), 3}))
+	assert.NoError(t, validateJSONCompatible([]any{1, 2, 3}))
+
+	// Test map[string]any with invalid values
+	assert.Error(t, validateJSONCompatible(map[string]any{"value": math.Inf(1)}))
+	assert.Error(t, validateJSONCompatible(map[string]any{"value": math.NaN()}))
+	assert.NoError(t, validateJSONCompatible(map[string]any{"value": 1.5}))
+
+	// Test map[any]any with invalid values
+	assert.Error(t, validateJSONCompatible(map[any]any{"value": math.Inf(1)}))
+	assert.Error(t, validateJSONCompatible(map[any]any{"value": math.NaN()}))
+	assert.NoError(t, validateJSONCompatible(map[any]any{"value": 1.5}))
+
+	// Test nested structures
+	assert.Error(t, validateJSONCompatible(map[string]any{
+		"nested": map[string]any{"value": math.Inf(1)},
+	}))
+	assert.Error(t, validateJSONCompatible([]any{
+		map[string]any{"value": math.NaN()},
+	}))
+
+	// Test valid values of various types
+	assert.NoError(t, validateJSONCompatible("string"))
+	assert.NoError(t, validateJSONCompatible(42))
+	assert.NoError(t, validateJSONCompatible(true))
+	assert.NoError(t, validateJSONCompatible(nil))
+}
@@ -43,7 +43,7 @@ func AddProbe(probe Probe) {
 	defaultHealthManager.addProbe(probe)
 }

-// CreateHttpHandler create health http handler base on given probe.
+// CreateHttpHandler creates a health http handler based on the given probe.
 func CreateHttpHandler(healthResponse string) http.HandlerFunc {
 	return func(w http.ResponseWriter, _ *http.Request) {
 		if defaultHealthManager.IsReady() {
166 mcp/MIGRATION.md Normal file
@@ -0,0 +1,166 @@
+# Migration to Official MCP SDK
+
+This document describes the migration from the custom MCP implementation to the official [go-sdk](https://github.com/modelcontextprotocol/go-sdk).
+
+## Changes
+
+### Dependencies
+
+Added the official MCP SDK:
+```bash
+go get github.com/modelcontextprotocol/go-sdk@v1.2.0
+```
+
+### Type System
+
+All types are now re-exported from the official SDK:
+- `Tool` → `sdkmcp.Tool`
+- `CallToolRequest` → `sdkmcp.CallToolRequest`
+- `CallToolResult` → `sdkmcp.CallToolResult`
+- Content types (`TextContent`, `ImageContent`, etc.)
+- `Prompt`, `Resource`, `Server`, `ServerSession`
+
+### Server Interface
+
+The `McpServer` interface has been simplified:
+
+```go
+type McpServer interface {
+    Start()
+    Stop()
+    Server() *sdkmcp.Server // Returns underlying SDK server
+}
+```
+
+**Important**: The `AddTool`, `AddPrompt`, and `AddResource` methods have been removed. Use the SDK directly:
+
+```go
+// Old (no longer supported)
+server.AddTool(tool, handler)
+
+// New (use SDK directly)
+sdkmcp.AddTool(server.Server(), tool, handler)
+```
+
+### Configuration
+
+Updated configuration structure:
+- Removed: `ProtocolVersion`, `BaseUrl` (SDK manages these)
+- Added: `UseStreamable` (choose between SSE and Streamable HTTP transport)
+
+```yaml
+mcp:
+  name: my-server
+  version: 1.0.0
+  useStreamable: false # false = SSE (2024-11-05), true = Streamable HTTP (2025-03-26)
+  sseEndpoint: /sse
+  messageEndpoint: /message
+  sseTimeout: 24h
+  messageTimeout: 30s
+  cors:
+    - http://localhost:3000
+```
+
+### Tool Registration
+
+The SDK uses Go generics for type-safe tool registration:
+
+```go
+import sdkmcp "github.com/modelcontextprotocol/go-sdk/mcp"
+
+type MyArgs struct {
+    Value string `json:"value" jsonschema:"description=Input value"`
+}
+
+tool := &mcp.Tool{
+    Name:        "my_tool",
+    Description: "Description",
+}
+
+handler := func(ctx context.Context, req *mcp.CallToolRequest, args MyArgs) (*mcp.CallToolResult, any, error) {
+    return &mcp.CallToolResult{
+        Content: []mcp.Content{
+            &mcp.TextContent{Text: "Result"},
+        },
+    }, nil, nil
+}
+
+// Register with explicit type parameters
+sdkmcp.AddTool(server.Server(), tool, handler)
+```
+
+The SDK automatically generates JSON schemas from struct tags.
+
+### Transport Support
+
+Two transports are supported:
+
+1. **SSE (Server-Sent Events)**: 2024-11-05 MCP spec
+   - Default (`UseStreamable: false`)
+   - Endpoint: `/sse` (configurable)
+   - Bidirectional: client sends messages to `/message`
+
+2. **Streamable HTTP**: 2025-03-26 MCP spec
+   - Opt-in (`UseStreamable: true`)
+   - Endpoint: `/sse` (configurable)
+   - Newer protocol with improved streaming
+
+### Example Migration
+
+**Before:**
+```go
+server := mcp.NewMcpServer(c)
+
+tool := &mcp.Tool{Name: "greet", Description: "Greet"}
+handler := func(ctx context.Context, req *mcp.CallToolRequest, args GreetArgs) (*mcp.CallToolResult, any, error) {
+    return &mcp.CallToolResult{
+        Content: []mcp.Content{&mcp.TextContent{Text: "Hello"}},
+    }, nil, nil
+}
+
+if err := server.AddTool(tool, handler); err != nil {
+    log.Fatal(err)
+}
+```
+
+**After:**
+```go
+import sdkmcp "github.com/modelcontextprotocol/go-sdk/mcp"
+
+server := mcp.NewMcpServer(c)
+
+tool := &mcp.Tool{Name: "greet", Description: "Greet"}
+handler := func(ctx context.Context, req *mcp.CallToolRequest, args GreetArgs) (*mcp.CallToolResult, any, error) {
+    return &mcp.CallToolResult{
+        Content: []mcp.Content{&mcp.TextContent{Text: "Hello"}},
+    }, nil, nil
+}
+
+// Use SDK directly - no error return
+sdkmcp.AddTool(server.Server(), tool, handler)
+```
+
+## Benefits
+
+1. **Official SDK**: Uses the official Model Context Protocol SDK
+2. **Type Safety**: Go generics provide compile-time type checking
+3. **Auto Schema**: JSON schemas generated automatically from struct tags
+4. **Dual Transport**: Supports both SSE and Streamable HTTP transports
+5. **Maintained**: SDK is actively maintained by the MCP team
+
+## Breaking Changes
+
+1. `server.AddTool()` removed → use `sdkmcp.AddTool(server.Server(), ...)`
+2. `server.AddPrompt()` removed (SDK v1.2.0 limitation)
+3. `server.AddResource()` removed (SDK v1.2.0 limitation)
+4. Config fields `ProtocolVersion` and `BaseUrl` removed
+5. All types now come from SDK (re-exported for convenience)
+
+## Migration Checklist
+
+- [ ] Update imports: add `sdkmcp "github.com/modelcontextprotocol/go-sdk/mcp"`
+- [ ] Replace `server.AddTool()` with `sdkmcp.AddTool(server.Server(), ...)`
+- [ ] Remove error handling for tool registration (SDK doesn't return errors)
+- [ ] Update config: remove `ProtocolVersion` and `BaseUrl`, add `UseStreamable`
+- [ ] Test with both SSE and Streamable transports
+- [ ] Update documentation/examples
@@ -18,17 +18,16 @@ type McpConf struct {
 	// Version is the server version reported in initialize responses
 	Version string `json:",default=1.0.0"`

-	// ProtocolVersion is the MCP protocol version implemented
-	ProtocolVersion string `json:",default=2024-11-05"`
-
-	// BaseUrl is the base URL for the server, used in SSE endpoint messages
-	// If not set, defaults to http://localhost:{Port}
-	BaseUrl string `json:",optional"`
+	// UseStreamable when true uses Streamable HTTP transport (2025-03-26 spec),
+	// otherwise uses SSE transport (2024-11-05 spec)
+	UseStreamable bool `json:",default=false"`

 	// SseEndpoint is the path for Server-Sent Events connections
+	// Used for SSE transport mode
 	SseEndpoint string `json:",default=/sse"`

 	// MessageEndpoint is the path for JSON-RPC requests
+	// Used for Streamable HTTP transport mode
 	MessageEndpoint string `json:",default=/message"`

 	// Cors contains allowed CORS origins
@@ -9,7 +9,7 @@ import (
 )

 func TestMcpConfDefaults(t *testing.T) {
-	// Test default values are set correctly when unmarshalled from JSON
+	// Test default values are set correctly
 	jsonConfig := `name: test-service
 port: 8080
 mcp:
@@ -23,41 +23,8 @@ mcp:

 	// Check default values
 	assert.Equal(t, "test-mcp-server", c.Mcp.Name)
-	assert.Equal(t, "1.0.0", c.Mcp.Version, "Default version should be 1.0.0")
-	assert.Equal(t, "2024-11-05", c.Mcp.ProtocolVersion, "Default protocol version should be 2024-11-05")
-	assert.Equal(t, "/sse", c.Mcp.SseEndpoint, "Default SSE endpoint should be /sse")
-	assert.Equal(t, "/message", c.Mcp.MessageEndpoint, "Default message endpoint should be /message")
-	assert.Equal(t, 30*time.Second, c.Mcp.MessageTimeout, "Default message timeout should be 30s")
-}
-
-func TestMcpConfCustomValues(t *testing.T) {
-	// Test custom values can be set
-	jsonConfig := `{
-	"Name": "test-service",
-	"Port": 8080,
-	"Mcp": {
-		"Name": "test-mcp-server",
-		"Version": "2.0.0",
-		"ProtocolVersion": "2025-01-01",
-		"BaseUrl": "http://example.com",
-		"SseEndpoint": "/custom-sse",
-		"MessageEndpoint": "/custom-message",
-		"Cors": ["http://localhost:3000", "http://example.com"],
-		"MessageTimeout": "60s"
-	}
-}`
-
-	var c McpConf
-	err := conf.LoadFromJsonBytes([]byte(jsonConfig), &c)
-	assert.NoError(t, err)
-
-	// Check custom values
-	assert.Equal(t, "test-mcp-server", c.Mcp.Name, "Name should be inherited from RestConf")
-	assert.Equal(t, "2.0.0", c.Mcp.Version, "Version should be customizable")
-	assert.Equal(t, "2025-01-01", c.Mcp.ProtocolVersion, "Protocol version should be customizable")
-	assert.Equal(t, "http://example.com", c.Mcp.BaseUrl, "BaseUrl should be customizable")
-	assert.Equal(t, "/custom-sse", c.Mcp.SseEndpoint, "SSE endpoint should be customizable")
-	assert.Equal(t, "/custom-message", c.Mcp.MessageEndpoint, "Message endpoint should be customizable")
-	assert.Equal(t, []string{"http://localhost:3000", "http://example.com"}, c.Mcp.Cors, "CORS settings should be customizable")
-	assert.Equal(t, 60*time.Second, c.Mcp.MessageTimeout, "Tool timeout should be customizable")
+	assert.Equal(t, "1.0.0", c.Mcp.Version)
+	assert.Equal(t, "/sse", c.Mcp.SseEndpoint)
+	assert.Equal(t, "/message", c.Mcp.MessageEndpoint)
+	assert.Equal(t, 30*time.Second, c.Mcp.MessageTimeout)
 }
@@ -1,443 +0,0 @@
-package mcp
-
-import (
-	"bytes"
-	"context"
-	"encoding/json"
-	"fmt"
-	"net/http"
-	"net/http/httptest"
-	"sync"
-	"testing"
-	"time"
-
-	"github.com/stretchr/testify/assert"
-	"github.com/stretchr/testify/require"
-)
-
-// syncResponseRecorder is a thread-safe wrapper around httptest.ResponseRecorder
-type syncResponseRecorder struct {
-	*httptest.ResponseRecorder
-	mu sync.Mutex
-}
-
-// Create a new synchronized response recorder
-func newSyncResponseRecorder() *syncResponseRecorder {
-	return &syncResponseRecorder{
-		ResponseRecorder: httptest.NewRecorder(),
-	}
-}
-
-// Override Write method to synchronize access
-func (srr *syncResponseRecorder) Write(p []byte) (int, error) {
-	srr.mu.Lock()
-	defer srr.mu.Unlock()
-	return srr.ResponseRecorder.Write(p)
-}
-
-// Override WriteHeader method to synchronize access
-func (srr *syncResponseRecorder) WriteHeader(statusCode int) {
-	srr.mu.Lock()
-	defer srr.mu.Unlock()
-	srr.ResponseRecorder.WriteHeader(statusCode)
-}
-
-// Override Result method to synchronize access
-func (srr *syncResponseRecorder) Result() *http.Response {
-	srr.mu.Lock()
-	defer srr.mu.Unlock()
-	return srr.ResponseRecorder.Result()
-}
-
-// TestHTTPHandlerIntegration tests the HTTP handlers with a real server instance
-func TestHTTPHandlerIntegration(t *testing.T) {
-	// Skip in short test mode
-	if testing.Short() {
-		t.Skip("Skipping integration test in short mode")
-	}
-
-	// Create a test configuration
-	conf := McpConf{}
-	conf.Mcp.Name = "test-integration"
-	conf.Mcp.Version = "1.0.0-test"
-	conf.Mcp.MessageTimeout = 1 * time.Second
-
-	// Create a mock server directly
-	server := &sseMcpServer{
-		conf:      conf,
-		clients:   make(map[string]*mcpClient),
-		tools:     make(map[string]Tool),
-		prompts:   make(map[string]Prompt),
-		resources: make(map[string]Resource),
-	}
-
-	// Register a test tool
-	err := server.RegisterTool(Tool{
-		Name:        "echo",
-		Description: "Echo tool for testing",
-		InputSchema: InputSchema{
-			Properties: map[string]any{
-				"message": map[string]any{
-					"type":        "string",
-					"description": "Message to echo",
-				},
-			},
-		},
-		Handler: func(ctx context.Context, params map[string]any) (any, error) {
-			if msg, ok := params["message"].(string); ok {
-				return fmt.Sprintf("Echo: %s", msg), nil
-			}
-			return "Echo: no message provided", nil
-		},
-	})
-	require.NoError(t, err)
-
-	// Create a test HTTP request to the SSE endpoint
-	req := httptest.NewRequest("GET", "/sse", nil)
-	w := newSyncResponseRecorder()
-
-	// Create a done channel to signal completion of test
-	done := make(chan bool)
-
-	// Start the SSE handler in a goroutine
-	go func() {
-		// lock.Lock()
-		server.handleSSE(w, req)
-		// lock.Unlock()
-		done <- true
-	}()
-
-	// Allow time for the handler to process
-	select {
-	case <-time.After(100 * time.Millisecond):
-		// Expected - handler would normally block indefinitely
-	case <-done:
-		// This shouldn't happen immediately - the handler should block
-		t.Error("SSE handler returned unexpectedly")
-	}
-
-	// Check the initial headers
-	resp := w.Result()
-	assert.Equal(t, "chunked", resp.Header.Get("Transfer-Encoding"))
-	resp.Body.Close()
-
-	// The handler creates a client and sends the endpoint message
-	var sessionId string
-
-	// Give the handler time to set up the client
-	time.Sleep(50 * time.Millisecond)
-
-	// Check that a client was created
-	server.clientsLock.Lock()
-	assert.Equal(t, 1, len(server.clients))
-	for id := range server.clients {
-		sessionId = id
-	}
-	server.clientsLock.Unlock()
-
-	require.NotEmpty(t, sessionId, "Expected a session ID to be created")
-
-	// Now that we have a session ID, we can test the message endpoint
-	messageBody, _ := json.Marshal(Request{
-		JsonRpc: "2.0",
-		ID:      1,
-		Method:  methodInitialize,
-		Params:  json.RawMessage(`{}`),
-	})
-
-	// Create a message request
-	reqURL := fmt.Sprintf("/message?%s=%s", sessionIdKey, sessionId)
-	msgReq := httptest.NewRequest("POST", reqURL, bytes.NewReader(messageBody))
-	msgW := newSyncResponseRecorder()
-
-	// Process the message
-	server.handleRequest(msgW, msgReq)
-
-	// Check the response
-	msgResp := msgW.Result()
-	assert.Equal(t, http.StatusAccepted, msgResp.StatusCode)
-	msgResp.Body.Close() // Ensure response body is closed
-}
-
-// TestHandlerResponseFlow tests the flow of a full request/response cycle
-func TestHandlerResponseFlow(t *testing.T) {
-	// Create a mock server for testing
-	server := &sseMcpServer{
-		conf: McpConf{},
-		clients: map[string]*mcpClient{
-			"test-session": {
-				id:          "test-session",
-				channel:     make(chan string, 10),
-				initialized: true,
-			},
-		},
-		tools:     make(map[string]Tool),
-		prompts:   make(map[string]Prompt),
-		resources: make(map[string]Resource),
-	}
-
-	// Register test resources
-	server.RegisterTool(Tool{
-		Name:        "test.tool",
-		Description: "Test tool",
-		InputSchema: InputSchema{Type: "object"},
-		Handler: func(ctx context.Context, params map[string]any) (any, error) {
-			return "tool result", nil
-		},
-	})
-
-	server.RegisterPrompt(Prompt{
-		Name:        "test.prompt",
-		Description: "Test prompt",
-	})
-
-	server.RegisterResource(Resource{
-		Name:        "test.resource",
-		URI:         "http://example.com",
-		Description: "Test resource",
-	})
-
-	// Create a request with session ID parameter
-	reqURL := fmt.Sprintf("/message?%s=%s", sessionIdKey, "test-session")
-
-	// Test tools/list request
-	toolsListBody, _ := json.Marshal(Request{
-		JsonRpc: "2.0",
-		ID:      1,
-		Method:  methodToolsList,
-		Params:  json.RawMessage(`{}`),
-	})
-
-	toolsReq := httptest.NewRequest("POST", reqURL, bytes.NewReader(toolsListBody))
-	toolsW := newSyncResponseRecorder()
-
-	// Process the request
-	server.handleRequest(toolsW, toolsReq)
-
-	// Check the response code
-	toolsResp := toolsW.Result()
-	assert.Equal(t, http.StatusAccepted, toolsResp.StatusCode)
-	toolsResp.Body.Close()
-
-	// Check the channel message
-	client := server.clients["test-session"]
-	select {
-	case message := <-client.channel:
-		assert.Contains(t, message, `"tools":[{"name":"test.tool"`)
-	case <-time.After(100 * time.Millisecond):
-		t.Fatal("Timed out waiting for tools/list response")
-	}
-
-	// Test prompts/list request
-	promptsListBody, _ := json.Marshal(Request{
-		JsonRpc: "2.0",
-		ID:      2,
-		Method:  methodPromptsList,
-		Params:  json.RawMessage(`{}`),
-	})
-
-	promptsReq := httptest.NewRequest("POST", reqURL, bytes.NewReader(promptsListBody))
-	promptsW := newSyncResponseRecorder()
-
-	// Process the request
-	server.handleRequest(promptsW, promptsReq)
-
-	// Check the response code
-	promptsResp := promptsW.Result()
-	assert.Equal(t, http.StatusAccepted, promptsResp.StatusCode)
-	promptsResp.Body.Close()
-
-	// Check the channel message
-	select {
-	case message := <-client.channel:
-		assert.Contains(t, message, `"prompts":[{"name":"test.prompt"`)
-	case <-time.After(100 * time.Millisecond):
-		t.Fatal("Timed out waiting for prompts/list response")
-	}
-
-	// Test resources/list request
-	resourcesListBody, _ := json.Marshal(Request{
-		JsonRpc: "2.0",
-		ID:      3,
-		Method:  methodResourcesList,
-		Params:  json.RawMessage(`{}`),
-	})
-
-	resourcesReq := httptest.NewRequest("POST", reqURL, bytes.NewReader(resourcesListBody))
-	resourcesW := newSyncResponseRecorder()
-
-	// Process the request
-	server.handleRequest(resourcesW, resourcesReq)
-
-	// Check the response code
-	resourcesResp := resourcesW.Result()
-	assert.Equal(t, http.StatusAccepted, resourcesResp.StatusCode)
-	resourcesResp.Body.Close()
-
-	// Check the channel message
-	select {
-	case message := <-client.channel:
-		assert.Contains(t, message, `"name":"test.resource"`)
-	case <-time.After(100 * time.Millisecond):
-		t.Fatal("Timed out waiting for resources/list response")
-	}
-}
-
-// TestProcessListMethods tests the list processing methods with pagination
-func TestProcessListMethods(t *testing.T) {
-	server := &sseMcpServer{
-		tools:     make(map[string]Tool),
-		prompts:   make(map[string]Prompt),
-		resources: make(map[string]Resource),
-	}
-
-	// Add some test data
-	for i := 1; i <= 5; i++ {
-		tool := Tool{
-			Name:        fmt.Sprintf("tool%d", i),
-			Description: fmt.Sprintf("Tool %d", i),
-			InputSchema: InputSchema{Type: "object"},
-		}
-		server.tools[tool.Name] = tool
-
-		prompt := Prompt{
-			Name:        fmt.Sprintf("prompt%d", i),
-			Description: fmt.Sprintf("Prompt %d", i),
-		}
-		server.prompts[prompt.Name] = prompt
-
-		resource := Resource{
-			Name:        fmt.Sprintf("resource%d", i),
-			URI:         fmt.Sprintf("http://example.com/%d", i),
-			Description: fmt.Sprintf("Resource %d", i),
-		}
-		server.resources[resource.Name] = resource
-	}
-
-	// Create a test client
-	client := &mcpClient{
-		id:          "test-client",
-		channel:     make(chan string, 10),
-		initialized: true,
-	}
-
-	// Test processListTools
-	req := Request{
-		JsonRpc: "2.0",
-		ID:      1,
-		Method:  methodToolsList,
-		Params:  json.RawMessage(`{"cursor": "", "_meta": {"progressToken": "token1"}}`),
-	}
-
-	server.processListTools(context.Background(), client, req)
-
-	// Read response
-	select {
-	case response := <-client.channel:
-		assert.Contains(t, response, `"tools":`)
-		assert.Contains(t, response, `"progressToken":"token1"`)
-	case <-time.After(100 * time.Millisecond):
-		t.Fatal("Timed out waiting for tools/list response")
-	}
-
-	// Test processListPrompts
-	req.ID = 2
-	req.Method = methodPromptsList
-	req.Params = json.RawMessage(`{"cursor": "next"}`)
-	server.processListPrompts(context.Background(), client, req)
-
-	// Read response
-	select {
-	case response := <-client.channel:
-		assert.Contains(t, response, `"prompts":`)
-	case <-time.After(100 * time.Millisecond):
-		t.Fatal("Timed out waiting for prompts/list response")
-	}
-
-	// Test processListResources
-	req.ID = 3
-	req.Method = methodResourcesList
-	req.Params = json.RawMessage(`{"cursor": "next"}`)
-	server.processListResources(context.Background(), client, req)
-
-	// Read response
|
|
||||||
select {
|
|
||||||
case response := <-client.channel:
|
|
||||||
assert.Contains(t, response, `"resources":`)
|
|
||||||
case <-time.After(100 * time.Millisecond):
|
|
||||||
t.Fatal("Timed out waiting for resources/list response")
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// TestErrorResponseHandling tests error handling in the server
|
|
||||||
func TestErrorResponseHandling(t *testing.T) {
|
|
||||||
server := &sseMcpServer{
|
|
||||||
tools: make(map[string]Tool),
|
|
||||||
prompts: make(map[string]Prompt),
|
|
||||||
resources: make(map[string]Resource),
|
|
||||||
}
|
|
||||||
|
|
||||||
// Create a test client
|
|
||||||
client := &mcpClient{
|
|
||||||
id: "test-client",
|
|
||||||
channel: make(chan string, 10),
|
|
||||||
initialized: true,
|
|
||||||
}
|
|
||||||
|
|
||||||
// Test invalid method
|
|
||||||
req := Request{
|
|
||||||
JsonRpc: "2.0",
|
|
||||||
ID: 1,
|
|
||||||
Method: "invalid_method",
|
|
||||||
Params: json.RawMessage(`{}`),
|
|
||||||
}
|
|
||||||
|
|
||||||
// Mock handleRequest by directly calling error handler
|
|
||||||
server.sendErrorResponse(context.Background(), client, req.ID, "Method not found", errCodeMethodNotFound)
|
|
||||||
|
|
||||||
// Check response
|
|
||||||
select {
|
|
||||||
case response := <-client.channel:
|
|
||||||
assert.Contains(t, response, `"error":{"code":-32601,"message":"Method not found"}`)
|
|
||||||
case <-time.After(100 * time.Millisecond):
|
|
||||||
t.Fatal("Timed out waiting for error response")
|
|
||||||
}
|
|
||||||
|
|
||||||
// Test invalid tool
|
|
||||||
toolReq := Request{
|
|
||||||
JsonRpc: "2.0",
|
|
||||||
ID: 2,
|
|
||||||
Method: methodToolsCall,
|
|
||||||
Params: json.RawMessage(`{"name":"non_existent_tool"}`),
|
|
||||||
}
|
|
||||||
|
|
||||||
// Call process method directly
|
|
||||||
server.processToolCall(context.Background(), client, toolReq)
|
|
||||||
|
|
||||||
// Check response
|
|
||||||
select {
|
|
||||||
case response := <-client.channel:
|
|
||||||
assert.Contains(t, response, `"error":{"code":-32602,"message":"Tool 'non_existent_tool' not found"}`)
|
|
||||||
case <-time.After(100 * time.Millisecond):
|
|
||||||
t.Fatal("Timed out waiting for error response")
|
|
||||||
}
|
|
||||||
|
|
||||||
// Test invalid prompt
|
|
||||||
promptReq := Request{
|
|
||||||
JsonRpc: "2.0",
|
|
||||||
ID: 3,
|
|
||||||
Method: methodPromptsGet,
|
|
||||||
Params: json.RawMessage(`{"name":"non_existent_prompt"}`),
|
|
||||||
}
|
|
||||||
|
|
||||||
// Call process method directly
|
|
||||||
server.processGetPrompt(context.Background(), client, promptReq)
|
|
||||||
|
|
||||||
// Check response
|
|
||||||
select {
|
|
||||||
case response := <-client.channel:
|
|
||||||
assert.Contains(t, response, `"error":{"code":-32602,"message":"Prompt 'non_existent_prompt' not found"}`)
|
|
||||||
case <-time.After(100 * time.Millisecond):
|
|
||||||
t.Fatal("Timed out waiting for error response")
|
|
||||||
}
|
|
||||||
}
|
|
||||||
mcp/options.go (new file, 33 lines)
@@ -0,0 +1,33 @@
package mcp

import "net/http"

// RequestMetadataExtractor extracts request metadata for downstream handlers.
type RequestMetadataExtractor func(*http.Request) RequestMetadata

// McpOption customizes MCP server construction.
type McpOption interface {
	apply(*serverOptions)
}

type mcpOptionFunc func(*serverOptions)

func (f mcpOptionFunc) apply(opts *serverOptions) {
	f(opts)
}

type serverOptions struct {
	requestMetadataExtractor RequestMetadataExtractor
}

func defaultServerOptions() serverOptions {
	return serverOptions{}
}

// WithRequestMetadataExtractor installs an extractor that runs for each incoming
// MCP HTTP request, and stores the extracted metadata into handler context.
func WithRequestMetadataExtractor(extractor RequestMetadataExtractor) McpOption {
	return mcpOptionFunc(func(opts *serverOptions) {
		opts.requestMetadataExtractor = extractor
	})
}
@@ -1,23 +0,0 @@ (file deleted)
package mcp

import (
	"fmt"

	"github.com/zeromicro/go-zero/core/mapping"
)

// ParseArguments parses the arguments and populates the request object
func ParseArguments(args any, req any) error {
	switch arguments := args.(type) {
	case map[string]string:
		m := make(map[string]any, len(arguments))
		for k, v := range arguments {
			m[k] = v
		}
		return mapping.UnmarshalJsonMap(m, req, mapping.WithStringValues())
	case map[string]any:
		return mapping.UnmarshalJsonMap(arguments, req)
	default:
		return fmt.Errorf("unsupported argument type: %T", arguments)
	}
}
@@ -1,139 +0,0 @@ (file deleted)
package mcp

import (
	"testing"

	"github.com/stretchr/testify/assert"
)

// TestParseArguments_MapStringString tests parsing map[string]string arguments
func TestParseArguments_MapStringString(t *testing.T) {
	// Sample request struct to populate
	type requestStruct struct {
		Name    string `json:"name"`
		Message string `json:"message"`
		Count   int    `json:"count"`
		Enabled bool   `json:"enabled"`
	}

	// Create test arguments
	args := map[string]string{
		"name":    "test-name",
		"message": "hello world",
		"count":   "42",
		"enabled": "true",
	}

	// Create a target object to populate
	var req requestStruct

	// Parse the arguments
	err := ParseArguments(args, &req)

	// Verify results
	assert.NoError(t, err, "Should parse map[string]string without error")
	assert.Equal(t, "test-name", req.Name, "Name should be correctly parsed")
	assert.Equal(t, "hello world", req.Message, "Message should be correctly parsed")
	assert.Equal(t, 42, req.Count, "Count should be correctly parsed to int")
	assert.True(t, req.Enabled, "Enabled should be correctly parsed to bool")
}

// TestParseArguments_MapStringAny tests parsing map[string]any arguments
func TestParseArguments_MapStringAny(t *testing.T) {
	// Sample request struct to populate
	type requestStruct struct {
		Name     string            `json:"name"`
		Message  string            `json:"message"`
		Count    int               `json:"count"`
		Enabled  bool              `json:"enabled"`
		Tags     []string          `json:"tags"`
		Metadata map[string]string `json:"metadata"`
	}

	// Create test arguments with mixed types
	args := map[string]any{
		"name":    "test-name",
		"message": "hello world",
		"count":   42,   // note: this is already an int
		"enabled": true, // note: this is already a bool
		"tags":    []string{"tag1", "tag2"},
		"metadata": map[string]string{
			"key1": "value1",
			"key2": "value2",
		},
	}

	// Create a target object to populate
	var req requestStruct

	// Parse the arguments
	err := ParseArguments(args, &req)

	// Verify results
	assert.NoError(t, err, "Should parse map[string]any without error")
	assert.Equal(t, "test-name", req.Name, "Name should be correctly parsed")
	assert.Equal(t, "hello world", req.Message, "Message should be correctly parsed")
	assert.Equal(t, 42, req.Count, "Count should be correctly parsed")
	assert.True(t, req.Enabled, "Enabled should be correctly parsed")
	assert.Equal(t, []string{"tag1", "tag2"}, req.Tags, "Tags should be correctly parsed")
	assert.Equal(t, map[string]string{
		"key1": "value1",
		"key2": "value2",
	}, req.Metadata, "Metadata should be correctly parsed")
}

// TestParseArguments_UnsupportedType tests parsing with an unsupported type
func TestParseArguments_UnsupportedType(t *testing.T) {
	// Sample request struct to populate
	type requestStruct struct {
		Name    string `json:"name"`
		Message string `json:"message"`
	}

	// Use an unsupported argument type (slice)
	args := []string{"not", "a", "map"}

	// Create a target object to populate
	var req requestStruct

	// Parse the arguments
	err := ParseArguments(args, &req)

	// Verify error is returned with correct message
	assert.Error(t, err, "Should return error for unsupported type")
	assert.Contains(t, err.Error(), "unsupported argument type", "Error should mention unsupported type")
	assert.Contains(t, err.Error(), "[]string", "Error should include the actual type")
}

// TestParseArguments_EmptyMap tests parsing with empty maps
func TestParseArguments_EmptyMap(t *testing.T) {
	// Sample request struct to populate
	type requestStruct struct {
		Name    string `json:"name,optional"`
		Message string `json:"message,optional"`
	}

	// Test empty map[string]string
	t.Run("EmptyMapStringString", func(t *testing.T) {
		args := map[string]string{}
		var req requestStruct

		err := ParseArguments(args, &req)

		assert.NoError(t, err, "Should parse empty map[string]string without error")
		assert.Empty(t, req.Name, "Name should be empty string")
		assert.Empty(t, req.Message, "Message should be empty string")
	})

	// Test empty map[string]any
	t.Run("EmptyMapStringAny", func(t *testing.T) {
		args := map[string]any{}
		var req requestStruct

		err := ParseArguments(args, &req)

		assert.NoError(t, err, "Should parse empty map[string]any without error")
		assert.Empty(t, req.Name, "Name should be empty string")
		assert.Empty(t, req.Message, "Message should be empty string")
	})
}
mcp/readme.md (1042 lines): diff suppressed because it is too large
mcp/request_metadata.go (new file, 150 lines)
@@ -0,0 +1,150 @@
package mcp

import (
	"context"
	"net/http"

	"github.com/zeromicro/go-zero/rest/pathvar"
)

// RequestMetadata carries selected request-scoped values into MCP handlers.
type RequestMetadata struct {
	Headers map[string][]string
	Query   map[string][]string
	Path    map[string]string
}

type requestMetadataCtxKey struct{}

// RequestMetadataFromContext returns metadata extracted at the transport boundary.
func RequestMetadataFromContext(ctx context.Context) (RequestMetadata, bool) {
	metadata, ok := requestMetadataFromContext(ctx)
	if !ok {
		return RequestMetadata{}, false
	}

	return normalizeRequestMetadata(metadata), true
}

// HeaderFromContext returns the first header value for key.
func HeaderFromContext(ctx context.Context, key string) (string, bool) {
	metadata, ok := requestMetadataFromContext(ctx)
	if !ok {
		return "", false
	}

	vals := metadata.Headers[http.CanonicalHeaderKey(key)]
	if len(vals) == 0 {
		return "", false
	}

	return vals[0], true
}

// QueryFromContext returns the first query value for key.
func QueryFromContext(ctx context.Context, key string) (string, bool) {
	metadata, ok := requestMetadataFromContext(ctx)
	if !ok {
		return "", false
	}

	vals := metadata.Query[key]
	if len(vals) == 0 {
		return "", false
	}

	return vals[0], true
}

// PathFromContext returns the path variable value for key.
func PathFromContext(ctx context.Context, key string) (string, bool) {
	metadata, ok := requestMetadataFromContext(ctx)
	if !ok {
		return "", false
	}

	val, ok := metadata.Path[key]
	if !ok {
		return "", false
	}

	return val, true
}

func requestMetadataFromContext(ctx context.Context) (RequestMetadata, bool) {
	metadata, ok := ctx.Value(requestMetadataCtxKey{}).(RequestMetadata)
	if !ok {
		return RequestMetadata{}, false
	}

	return metadata, true
}

// DefaultRequestMetadataExtractor extracts headers, query values, and path variables.
func DefaultRequestMetadataExtractor(r *http.Request) RequestMetadata {
	metadata := RequestMetadata{
		Headers: make(map[string][]string, len(r.Header)),
		Query:   make(map[string][]string),
		Path:    clonePathVars(pathvar.Vars(r)),
	}

	for key, vals := range r.Header {
		metadata.Headers[http.CanonicalHeaderKey(key)] = append([]string(nil), vals...)
	}

	if r.URL != nil {
		for key, vals := range r.URL.Query() {
			metadata.Query[key] = append([]string(nil), vals...)
		}
	}

	return metadata
}

func normalizeRequestMetadata(metadata RequestMetadata) RequestMetadata {
	return RequestMetadata{
		Headers: cloneCanonicalHeaderValues(metadata.Headers),
		Query:   cloneHeaderValues(metadata.Query),
		Path:    clonePathVars(metadata.Path),
	}
}

func cloneHeaderValues(values map[string][]string) map[string][]string {
	if len(values) == 0 {
		return nil
	}

	cloned := make(map[string][]string, len(values))
	for key, vals := range values {
		cloned[key] = append([]string(nil), vals...)
	}

	return cloned
}

func cloneCanonicalHeaderValues(values map[string][]string) map[string][]string {
	if len(values) == 0 {
		return nil
	}

	cloned := make(map[string][]string, len(values))
	for key, vals := range values {
		canonical := http.CanonicalHeaderKey(key)
		cloned[canonical] = append(cloned[canonical], vals...)
	}

	return cloned
}

func clonePathVars(values map[string]string) map[string]string {
	if len(values) == 0 {
		return nil
	}

	cloned := make(map[string]string, len(values))
	for key, val := range values {
		cloned[key] = val
	}

	return cloned
}
mcp/request_metadata_test.go (new file, 185 lines)
@@ -0,0 +1,185 @@
package mcp

import (
	"context"
	"net/http"
	"net/http/httptest"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/zeromicro/go-zero/rest/pathvar"
)

func TestDefaultRequestMetadataExtractor(t *testing.T) {
	req := httptest.NewRequest(http.MethodGet, "/sse?tenant=t1&trace=abc", nil)
	req.Header.Add("X-Tenant-Id", "tenant-from-header")
	req = pathvar.WithVars(req, map[string]string{"tool": "sum"})

	metadata := DefaultRequestMetadataExtractor(req)
	header, ok := metadata.Headers["X-Tenant-Id"]
	assert.True(t, ok)
	assert.Equal(t, []string{"tenant-from-header"}, header)
	assert.Equal(t, []string{"t1"}, metadata.Query["tenant"])
	assert.Equal(t, "sum", metadata.Path["tool"])
}

func TestRequestMetadataContextHelpers(t *testing.T) {
	ctx := context.WithValue(context.Background(), requestMetadataCtxKey{}, RequestMetadata{
		Headers: map[string][]string{"X-Trace-Id": {"trace-1"}},
		Query:   map[string][]string{"tenant": {"foo"}},
		Path:    map[string]string{"scope": "prod"},
	})

	metadata, ok := RequestMetadataFromContext(ctx)
	assert.True(t, ok)
	assert.Equal(t, []string{"trace-1"}, metadata.Headers["X-Trace-Id"])

	header, ok := HeaderFromContext(ctx, "x-trace-id")
	assert.True(t, ok)
	assert.Equal(t, "trace-1", header)

	query, ok := QueryFromContext(ctx, "tenant")
	assert.True(t, ok)
	assert.Equal(t, "foo", query)

	path, ok := PathFromContext(ctx, "scope")
	assert.True(t, ok)
	assert.Equal(t, "prod", path)
}

func TestRequestMetadataContextHelpersMissingKeys(t *testing.T) {
	ctx := context.WithValue(context.Background(), requestMetadataCtxKey{}, RequestMetadata{
		Headers: map[string][]string{"X-Trace-Id": {"trace-1"}},
		Query:   map[string][]string{"tenant": {"foo"}},
		Path:    map[string]string{"scope": "prod"},
	})

	_, ok := HeaderFromContext(ctx, "x-missing")
	assert.False(t, ok)

	_, ok = QueryFromContext(ctx, "missing")
	assert.False(t, ok)

	_, ok = PathFromContext(ctx, "missing")
	assert.False(t, ok)
}

func TestRequestMetadataFromContextNotFound(t *testing.T) {
	_, ok := RequestMetadataFromContext(context.Background())
	assert.False(t, ok)

	_, ok = HeaderFromContext(context.Background(), "x-test")
	assert.False(t, ok)

	_, ok = QueryFromContext(context.Background(), "tenant")
	assert.False(t, ok)

	_, ok = PathFromContext(context.Background(), "tenant")
	assert.False(t, ok)
}

func TestWrapRequestMetadata(t *testing.T) {
	s := &mcpServerImpl{
		options: serverOptions{
			requestMetadataExtractor: DefaultRequestMetadataExtractor,
		},
	}

	called := false
	handler := s.wrapRequestMetadata(http.HandlerFunc(func(_ http.ResponseWriter, r *http.Request) {
		called = true
		header, ok := HeaderFromContext(r.Context(), "x-tenant-id")
		assert.True(t, ok)
		assert.Equal(t, "tenant-1", header)

		query, ok := QueryFromContext(r.Context(), "tenant")
		assert.True(t, ok)
		assert.Equal(t, "q-tenant", query)
	}))

	req := httptest.NewRequest(http.MethodGet, "/sse?tenant=q-tenant", nil)
	req.Header.Set("X-Tenant-Id", "tenant-1")
	rr := httptest.NewRecorder()
	handler.ServeHTTP(rr, req)

	assert.True(t, called)
}

func TestWrapRequestMetadataNoExtractor(t *testing.T) {
	s := &mcpServerImpl{}

	called := false
	handler := s.wrapRequestMetadata(http.HandlerFunc(func(_ http.ResponseWriter, r *http.Request) {
		called = true
		_, ok := RequestMetadataFromContext(r.Context())
		assert.False(t, ok)
	}))

	rr := httptest.NewRecorder()
	handler.ServeHTTP(rr, httptest.NewRequest(http.MethodGet, "/sse", nil))

	assert.True(t, called)
}

func TestWrapRequestMetadataCanonicalizesCustomHeaders(t *testing.T) {
	s := &mcpServerImpl{
		options: serverOptions{
			requestMetadataExtractor: func(*http.Request) RequestMetadata {
				return RequestMetadata{
					Headers: map[string][]string{
						"x-tenant-id": {"tenant-lower"},
					},
				}
			},
		},
	}

	called := false
	handler := s.wrapRequestMetadata(http.HandlerFunc(func(_ http.ResponseWriter, r *http.Request) {
		called = true
		header, ok := HeaderFromContext(r.Context(), "X-Tenant-Id")
		assert.True(t, ok)
		assert.Equal(t, "tenant-lower", header)
	}))

	rr := httptest.NewRecorder()
	handler.ServeHTTP(rr, httptest.NewRequest(http.MethodGet, "/sse", nil))

	assert.True(t, called)
}

func TestRequestMetadataFromContextReturnsCopy(t *testing.T) {
	ctx := context.WithValue(context.Background(), requestMetadataCtxKey{}, RequestMetadata{
		Headers: map[string][]string{"X-Trace-Id": {"trace-1"}},
	})

	metadata, ok := RequestMetadataFromContext(ctx)
	assert.True(t, ok)
	metadata.Headers["X-Trace-Id"][0] = "mutated"
	metadata.Headers["X-New"] = []string{"new"}

	fresh, ok := RequestMetadataFromContext(ctx)
	assert.True(t, ok)
	assert.Equal(t, []string{"trace-1"}, fresh.Headers["X-Trace-Id"])
	assert.Nil(t, fresh.Headers["X-New"])
}

func TestRequestMetadataFromContextWithEmptyAndCanonicalizedHeaders(t *testing.T) {
	emptyCtx := context.WithValue(context.Background(), requestMetadataCtxKey{}, RequestMetadata{})
	empty, ok := RequestMetadataFromContext(emptyCtx)
	assert.True(t, ok)
	assert.Nil(t, empty.Headers)
	assert.Nil(t, empty.Query)
	assert.Nil(t, empty.Path)

	ctx := context.WithValue(context.Background(), requestMetadataCtxKey{}, RequestMetadata{
		Headers: map[string][]string{
			"x-tenant-id": {"a"},
			"X-Tenant-Id": {"b"},
		},
	})

	metadata, ok := RequestMetadataFromContext(ctx)
	assert.True(t, ok)
	assert.Equal(t, []string{"a", "b"}, metadata.Headers["X-Tenant-Id"])
}
mcp/server.go (999 lines): diff suppressed because it is too large
mcp/server_test.go (3856 lines): diff suppressed because it is too large
mcp/types.go (395 lines)
@@ -2,316 +2,99 @@ package mcp
|
|||||||
|
|
||||||
import (
|
import (
|
||||||
"context"
|
"context"
|
||||||
"encoding/json"
|
|
||||||
"fmt"
|
|
||||||
"sync"
|
|
||||||
|
|
||||||
"github.com/zeromicro/go-zero/rest"
|
sdkmcp "github.com/modelcontextprotocol/go-sdk/mcp"
|
||||||
|
"github.com/zeromicro/go-zero/core/logx"
|
||||||
)
|
)
|
||||||
|
|
||||||
// Cursor is an opaque token used for pagination
|
// Re-export commonly used SDK types for convenience
|
||||||
type Cursor string
|
type (
|
||||||
|
// Tool types
|
||||||
|
Tool = sdkmcp.Tool
|
||||||
|
CallToolParams = sdkmcp.CallToolParams
|
||||||
|
CallToolResult = sdkmcp.CallToolResult
|
||||||
|
CallToolRequest = sdkmcp.CallToolRequest
|
||||||
|
|
||||||
// Request represents a generic MCP request following JSON-RPC 2.0 specification
|
// Content types
|
||||||
type Request struct {
|
Content = sdkmcp.Content
|
||||||
SessionId string `form:"session_id"` // Session identifier for client tracking
|
TextContent = sdkmcp.TextContent
|
||||||
JsonRpc string `json:"jsonrpc"` // Must be "2.0" per JSON-RPC spec
|
ImageContent = sdkmcp.ImageContent
|
||||||
ID any `json:"id"` // Request identifier for matching responses
|
AudioContent = sdkmcp.AudioContent
|
||||||
Method string `json:"method"` // Method name to invoke
|
|
||||||
Params json.RawMessage `json:"params"` // Parameters for the method
|
|
||||||
}
|
|
||||||
|
|
||||||
func (r Request) isNotification() (bool, error) {
|
// Prompt types
|
||||||
switch val := r.ID.(type) {
|
Prompt = sdkmcp.Prompt
|
||||||
case int:
|
PromptMessage = sdkmcp.PromptMessage
|
||||||
return val == 0, nil
|
GetPromptParams = sdkmcp.GetPromptParams
|
||||||
case int64:
|
GetPromptResult = sdkmcp.GetPromptResult
|
||||||
return val == 0, nil
|
|
||||||
case float64:
|
// Resource types
|
||||||
return val == 0.0, nil
|
Resource = sdkmcp.Resource
|
||||||
case string:
|
ResourceContents = sdkmcp.ResourceContents
|
||||||
return len(val) == 0, nil
|
ReadResourceParams = sdkmcp.ReadResourceParams
|
||||||
case nil:
|
ReadResourceResult = sdkmcp.ReadResourceResult
|
||||||
return true, nil
|
|
||||||
default:
|
// Session and server types
|
||||||
return false, fmt.Errorf("invalid type %T", val)
|
Server = sdkmcp.Server
|
||||||
|
ServerSession = sdkmcp.ServerSession
|
||||||
|
ServerOptions = sdkmcp.ServerOptions
|
||||||
|
Implementation = sdkmcp.Implementation
|
||||||
|
|
||||||
|
// Transport types
|
||||||
|
SSEHandler = sdkmcp.SSEHandler
|
||||||
|
StreamableHTTPHandler = sdkmcp.StreamableHTTPHandler
|
||||||
|
)
|
||||||
|
|
||||||
|
// ToolHandler is a generic function signature for tool handlers.
|
||||||
|
// Handlers should accept context, request, and typed arguments, and return
|
||||||
|
// a result, metadata, and error.
|
||||||
|
//
|
||||||
|
// Deprecated: Use ToolHandlerFor directly from the SDK types.
|
||||||
|
type ToolHandler[Args any, Meta any] func(
|
||||||
|
ctx context.Context,
|
||||||
|
req *CallToolRequest,
|
||||||
|
args Args,
|
||||||
|
) (*CallToolResult, Meta, error)
|
||||||
|
|
||||||
|
// PromptHandler is a function signature for prompt handlers.
|
||||||
|
type PromptHandler func(
|
||||||
|
ctx context.Context,
|
||||||
|
req *sdkmcp.GetPromptRequest,
|
||||||
|
args map[string]string,
|
||||||
|
) (*GetPromptResult, error)
|
||||||
|
|
||||||
|
// ResourceHandler is a function signature for resource handlers.
|
||||||
|
type ResourceHandler func(
|
||||||
|
ctx context.Context,
|
||||||
|
req *sdkmcp.ReadResourceRequest,
|
||||||
|
uri string,
|
||||||
|
) (*ReadResourceResult, error)
|
||||||
|
|
||||||
|
// AddTool registers a tool with the MCP server using type-safe generics.
|
||||||
|
// The SDK automatically generates JSON schema from the Args struct tags.
|
||||||
|
//
|
||||||
|
// Example:
|
||||||
|
//
|
||||||
|
// type GreetArgs struct {
|
||||||
|
// Name string `json:"name" jsonschema:"description=Name to greet"`
|
||||||
|
// }
|
||||||
|
//
|
||||||
|
// tool := &mcp.Tool{
|
||||||
|
// Name: "greet",
|
||||||
|
// Description: "Greet someone",
|
||||||
|
// }
|
||||||
|
//
|
||||||
|
// handler := func(ctx context.Context, req *mcp.CallToolRequest, args GreetArgs) (*mcp.CallToolResult, any, error) {
|
||||||
|
// return &mcp.CallToolResult{
|
||||||
|
// Content: []mcp.Content{&mcp.TextContent{Text: "Hello " + args.Name}},
|
||||||
|
// }, nil, nil
|
||||||
|
// }
|
||||||
|
//
|
||||||
|
// mcp.AddTool(server, tool, handler)
|
||||||
|
func AddTool[In, Out any](server McpServer, tool *Tool, handler func(context.Context, *CallToolRequest, In) (*CallToolResult, Out, error)) {
|
||||||
|
// Access internal server - only works with mcpServerImpl
|
||||||
|
if impl, ok := server.(*mcpServerImpl); ok {
|
||||||
|
sdkmcp.AddTool(impl.mcpServer, tool, handler)
|
||||||
|
} else {
|
||||||
|
logx.Error("AddTool: server must be of type *mcpServerImpl to use this helper")
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
type PaginatedParams struct {
|
|
||||||
Cursor string `json:"cursor"`
|
|
||||||
Meta struct {
|
|
||||||
ProgressToken any `json:"progressToken"`
|
|
||||||
} `json:"_meta"`
|
|
||||||
}
|
|
||||||
|
|
||||||
// Result is the base interface for all results
|
|
||||||
type Result struct {
|
|
||||||
Meta map[string]any `json:"_meta,omitempty"` // Optional metadata
|
|
||||||
}
|
|
||||||
|
|
||||||
// PaginatedResult is a base for results that support pagination
|
|
||||||
type PaginatedResult struct {
|
|
||||||
Result
|
|
||||||
NextCursor Cursor `json:"nextCursor,omitempty"` // Opaque token for fetching next page
|
|
||||||
}
|
|
||||||
|
|
||||||
// ListToolsResult represents the response to a tools/list request
|
|
||||||
type ListToolsResult struct {
|
|
||||||
PaginatedResult
|
|
||||||
Tools []Tool `json:"tools"` // List of available tools
|
|
||||||
}
|
|
||||||
|
|
||||||
// Message Content Types

// RoleType represents the sender or recipient of messages in a conversation
type RoleType string

// PromptArgument defines a single argument that can be passed to a prompt
type PromptArgument struct {
    Name        string `json:"name"`                  // Argument name
    Description string `json:"description,omitempty"` // Human-readable description
    Required    bool   `json:"required,omitempty"`    // Whether this argument is required
}

// PromptHandler is a function that dynamically generates prompt content
type PromptHandler func(ctx context.Context, args map[string]string) ([]PromptMessage, error)

// Prompt represents an MCP Prompt definition
type Prompt struct {
    Name        string           `json:"name"`                  // Unique identifier for the prompt
    Description string           `json:"description,omitempty"` // Human-readable description
    Arguments   []PromptArgument `json:"arguments,omitempty"`   // Arguments for customization
    Content     string           `json:"-"`                     // Static content (internal use only)
    Handler     PromptHandler    `json:"-"`                     // Handler for dynamic content generation
}

// PromptMessage represents a message in a conversation
type PromptMessage struct {
    Role    RoleType `json:"role"`    // Message sender role
    Content any      `json:"content"` // Message content (TextContent, ImageContent, etc.)
}

// TextContent represents text content in a message
type TextContent struct {
    Text        string       `json:"text"`                  // The text content
    Annotations *Annotations `json:"annotations,omitempty"` // Optional annotations
}

type typedTextContent struct {
    Type string `json:"type"`
    TextContent
}

// ImageContent represents image data in a message
type ImageContent struct {
    Data     string `json:"data"`     // Base64-encoded image data
    MimeType string `json:"mimeType"` // MIME type (e.g., "image/png")
}

type typedImageContent struct {
    Type string `json:"type"`
    ImageContent
}

// AudioContent represents audio data in a message
type AudioContent struct {
    Data     string `json:"data"`     // Base64-encoded audio data
    MimeType string `json:"mimeType"` // MIME type (e.g., "audio/mp3")
}

type typedAudioContent struct {
    Type string `json:"type"`
    AudioContent
}

// FileContent represents file content
type FileContent struct {
    URI      string `json:"uri"`      // URI identifying the file
    MimeType string `json:"mimeType"` // MIME type of the file
    Text     string `json:"text"`     // File content as text
}

// EmbeddedResource represents a resource embedded in a message
type EmbeddedResource struct {
    Type     string          `json:"type"`     // Always "resource"
    Resource ResourceContent `json:"resource"` // The resource data
}

// Annotations provides additional metadata for content
type Annotations struct {
    Audience []RoleType `json:"audience,omitempty"` // Who should see this content
    Priority *float64   `json:"priority,omitempty"` // Optional priority (0-1)
}

// Tool-related Types

// ToolHandler is a function that handles tool calls
type ToolHandler func(ctx context.Context, params map[string]any) (any, error)

// Tool represents a Model Context Protocol Tool definition
type Tool struct {
    Name        string      `json:"name"`        // Unique identifier for the tool
    Description string      `json:"description"` // Human-readable description
    InputSchema InputSchema `json:"inputSchema"` // JSON Schema for parameters
    Handler     ToolHandler `json:"-"`           // Not sent to clients
}

// InputSchema represents a tool's input schema in JSON Schema format
type InputSchema struct {
    Type       string         `json:"type"`
    Properties map[string]any `json:"properties"`         // Property definitions
    Required   []string       `json:"required,omitempty"` // List of required properties
}

// CallToolResult represents a tool call result that conforms to the MCP schema
type CallToolResult struct {
    Result
    Content []any `json:"content"`           // Content items (text, images, etc.)
    IsError bool  `json:"isError,omitempty"` // True if tool execution failed
}

// Resource represents a Model Context Protocol Resource definition
type Resource struct {
    URI         string          `json:"uri"`                   // Unique resource identifier (RFC3986)
    Name        string          `json:"name"`                  // Human-readable name
    Description string          `json:"description,omitempty"` // Optional description
    MimeType    string          `json:"mimeType,omitempty"`    // Optional MIME type
    Handler     ResourceHandler `json:"-"`                     // Internal handler not sent to clients
}

// ResourceHandler is a function that handles resource read requests
type ResourceHandler func(ctx context.Context) (ResourceContent, error)

// ResourceContent represents the content of a resource
type ResourceContent struct {
    URI      string `json:"uri"`                // Resource URI (required)
    MimeType string `json:"mimeType,omitempty"` // MIME type of the resource
    Text     string `json:"text,omitempty"`     // Text content (if available)
    Blob     string `json:"blob,omitempty"`     // Base64-encoded blob data (if available)
}

// ResourcesListResult represents the response to a resources/list request
type ResourcesListResult struct {
    PaginatedResult
    Resources []Resource `json:"resources"` // List of available resources
}

// ResourceReadParams contains parameters for a resources/read request
type ResourceReadParams struct {
    URI string `json:"uri"` // URI of the resource to read
}

// ResourceReadResult contains the result of a resources/read request
type ResourceReadResult struct {
    Result
    Contents []ResourceContent `json:"contents"` // Array of resource content
}

// ResourceSubscribeParams contains parameters for a resources/subscribe request
type ResourceSubscribeParams struct {
    URI string `json:"uri"` // URI of the resource to subscribe to
}

// ResourceUpdateNotification represents a notification about a resource update
type ResourceUpdateNotification struct {
    URI     string          `json:"uri"`     // URI of the updated resource
    Content ResourceContent `json:"content"` // New resource content
}

// Client and Server Types

// mcpClient represents an SSE client connection
type mcpClient struct {
    id          string      // Unique client identifier
    channel     chan string // Channel for sending SSE messages
    initialized bool        // Tracks if client has sent notifications/initialized
}

// McpServer defines the interface for Model Context Protocol servers
type McpServer interface {
    Start()
    Stop()
    RegisterTool(tool Tool) error
    RegisterPrompt(prompt Prompt)
    RegisterResource(resource Resource)
}

// sseMcpServer implements the McpServer interface using SSE
type sseMcpServer struct {
    conf          McpConf
    server        *rest.Server
    clients       map[string]*mcpClient
    clientsLock   sync.Mutex
    tools         map[string]Tool
    toolsLock     sync.Mutex
    prompts       map[string]Prompt
    promptsLock   sync.Mutex
    resources     map[string]Resource
    resourcesLock sync.Mutex
}

// Response Types

// errorObj represents a JSON-RPC error object
type errorObj struct {
    Code    int    `json:"code"`    // Error code
    Message string `json:"message"` // Error message
}

// Response represents a JSON-RPC response
type Response struct {
    JsonRpc string    `json:"jsonrpc"`         // Always "2.0"
    ID      any       `json:"id"`              // Same as request ID
    Result  any       `json:"result"`          // Result object (null if error)
    Error   *errorObj `json:"error,omitempty"` // Error object (null if success)
}

// Server Information Types

// serverInfo provides information about the server
type serverInfo struct {
    Name    string `json:"name"`    // Server name
    Version string `json:"version"` // Server version
}

// capabilities describes the server's capabilities
type capabilities struct {
    Logging struct{} `json:"logging"`
    Prompts struct {
        ListChanged bool `json:"listChanged"` // Server will notify on prompt changes
    } `json:"prompts"`
    Resources struct {
        Subscribe   bool `json:"subscribe"`   // Server supports resource subscriptions
        ListChanged bool `json:"listChanged"` // Server will notify on resource changes
    } `json:"resources"`
    Tools struct {
        ListChanged bool `json:"listChanged"` // Server will notify on tool changes
    } `json:"tools"`
}

// initializationResponse is sent in response to an initialize request
type initializationResponse struct {
    ProtocolVersion string       `json:"protocolVersion"` // Protocol version
    Capabilities    capabilities `json:"capabilities"`    // Server capabilities
    ServerInfo      serverInfo   `json:"serverInfo"`      // Server information
}

// ToolCallParams contains the parameters for a tool call
type ToolCallParams struct {
    Name       string         `json:"name"`       // Tool name
    Parameters map[string]any `json:"parameters"` // Tool parameters
}

// ToolResult contains the result of a tool execution
type ToolResult struct {
    Type    string `json:"type"`    // Content type (text, image, etc.)
    Content any    `json:"content"` // Result content
}

// errorMessage represents a detailed error message
type errorMessage struct {
    Code    int    `json:"code"`       // Error code
    Message string `json:"message"`    // Error message
    Data    any    `json:",omitempty"` // Additional error data
}
package mcp

import (
    "context"
    "encoding/json"
    "errors"
    "testing"

    "github.com/stretchr/testify/assert"
)

func TestResponseMarshaling(t *testing.T) {
    // Test that the Response struct marshals correctly
    resp := Response{
        JsonRpc: "2.0",
        ID:      123,
        Result: map[string]string{
            "key": "value",
        },
    }

    data, err := json.Marshal(resp)
    assert.NoError(t, err)
    assert.Contains(t, string(data), `"jsonrpc":"2.0"`)
    assert.Contains(t, string(data), `"id":123`)
    assert.Contains(t, string(data), `"result":{"key":"value"}`)

    // Test response with error
    respWithError := Response{
        JsonRpc: "2.0",
        ID:      456,
        Error: &errorObj{
            Code:    errCodeInvalidRequest,
            Message: "Invalid Request",
        },
    }

    data, err = json.Marshal(respWithError)
    assert.NoError(t, err)
    assert.Contains(t, string(data), `"jsonrpc":"2.0"`)
    assert.Contains(t, string(data), `"id":456`)
    assert.Contains(t, string(data), `"error":{"code":-32600,"message":"Invalid Request"}`)
}

func TestRequestUnmarshaling(t *testing.T) {
    // Test that the Request struct unmarshals correctly
    jsonStr := `{
        "jsonrpc": "2.0",
        "id": 789,
        "method": "test_method",
        "params": {"key": "value"}
    }`

    var req Request
    err := json.Unmarshal([]byte(jsonStr), &req)

    assert.NoError(t, err)
    assert.Equal(t, "2.0", req.JsonRpc)
    assert.Equal(t, float64(789), req.ID)
    assert.Equal(t, "test_method", req.Method)

    // Check params unmarshaled correctly
    var params map[string]string
    err = json.Unmarshal(req.Params, &params)
    assert.NoError(t, err)
    assert.Equal(t, "value", params["key"])
}

func TestToolStructs(t *testing.T) {
    // Test Tool struct
    tool := Tool{
        Name:        "test.tool",
        Description: "A test tool",
        InputSchema: InputSchema{
            Type: "object",
            Properties: map[string]any{
                "input": map[string]any{
                    "type":        "string",
                    "description": "Input parameter",
                },
            },
            Required: []string{"input"},
        },
        Handler: func(ctx context.Context, params map[string]any) (any, error) {
            return "result", nil
        },
    }

    // Verify fields are correct
    assert.Equal(t, "test.tool", tool.Name)
    assert.Equal(t, "A test tool", tool.Description)
    assert.Equal(t, "object", tool.InputSchema.Type)
    assert.Contains(t, tool.InputSchema.Properties, "input")
    propMap, ok := tool.InputSchema.Properties["input"].(map[string]any)
    assert.True(t, ok, "Property should be a map")
    assert.Equal(t, "string", propMap["type"])
    assert.NotNil(t, tool.Handler)

    // Verify JSON marshalling (which should exclude the Handler function)
    data, err := json.Marshal(tool)
    assert.NoError(t, err)
    assert.Contains(t, string(data), `"name":"test.tool"`)
    assert.Contains(t, string(data), `"description":"A test tool"`)
    assert.Contains(t, string(data), `"inputSchema":`)
    assert.NotContains(t, string(data), `"Handler":`)
}

func TestPromptStructs(t *testing.T) {
    // Test Prompt struct
    prompt := Prompt{
        Name:        "test.prompt",
        Description: "A test prompt description",
    }

    // Verify fields are correct
    assert.Equal(t, "test.prompt", prompt.Name)
    assert.Equal(t, "A test prompt description", prompt.Description)

    // Verify JSON marshalling
    data, err := json.Marshal(prompt)
    assert.NoError(t, err)
    assert.Contains(t, string(data), `"name":"test.prompt"`)
    assert.Contains(t, string(data), `"description":"A test prompt description"`)
}

func TestResourceStructs(t *testing.T) {
    // Test Resource struct
    resource := Resource{
        Name:        "test.resource",
        URI:         "http://example.com/resource",
        Description: "A test resource",
    }

    // Verify fields are correct
    assert.Equal(t, "test.resource", resource.Name)
    assert.Equal(t, "http://example.com/resource", resource.URI)
    assert.Equal(t, "A test resource", resource.Description)

    // Verify JSON marshalling
    data, err := json.Marshal(resource)
    assert.NoError(t, err)
    assert.Contains(t, string(data), `"name":"test.resource"`)
    assert.Contains(t, string(data), `"uri":"http://example.com/resource"`)
    assert.Contains(t, string(data), `"description":"A test resource"`)
}

func TestContentTypes(t *testing.T) {
    // Test TextContent
    textContent := TextContent{
        Text: "Sample text",
        Annotations: &Annotations{
            Audience: []RoleType{RoleUser, RoleAssistant},
            Priority: ptr(1.0),
        },
    }

    data, err := json.Marshal(textContent)
    assert.NoError(t, err)
    assert.Contains(t, string(data), `"text":"Sample text"`)
    assert.Contains(t, string(data), `"audience":["user","assistant"]`)
    assert.Contains(t, string(data), `"priority":1`)

    // Test ImageContent
    imageContent := ImageContent{
        Data:     "base64data",
        MimeType: "image/png",
    }

    data, err = json.Marshal(imageContent)
    assert.NoError(t, err)
    assert.Contains(t, string(data), `"data":"base64data"`)
    assert.Contains(t, string(data), `"mimeType":"image/png"`)

    // Test AudioContent
    audioContent := AudioContent{
        Data:     "base64audio",
        MimeType: "audio/mp3",
    }

    data, err = json.Marshal(audioContent)
    assert.NoError(t, err)
    assert.Contains(t, string(data), `"data":"base64audio"`)
    assert.Contains(t, string(data), `"mimeType":"audio/mp3"`)
}

func TestCallToolResult(t *testing.T) {
    // Test CallToolResult
    result := CallToolResult{
        Result: Result{
            Meta: map[string]any{
                "progressToken": "token123",
            },
        },
        Content: []any{
            TextContent{
                Text: "Sample result",
            },
        },
        IsError: false,
    }

    data, err := json.Marshal(result)
    assert.NoError(t, err)
    assert.Contains(t, string(data), `"_meta":{"progressToken":"token123"}`)
    assert.Contains(t, string(data), `"content":[{"text":"Sample result"}]`)
    assert.NotContains(t, string(data), `"isError":`)
}

func TestRequest_isNotification(t *testing.T) {
    tests := []struct {
        name    string
        id      any
        want    bool
        wantErr error
    }{
        // integer test cases
        {name: "int zero", id: 0, want: true, wantErr: nil},
        {name: "int non-zero", id: 1, want: false, wantErr: nil},
        {name: "int64 zero", id: int64(0), want: true, wantErr: nil},
        {name: "int64 max", id: int64(9223372036854775807), want: false, wantErr: nil},

        // floating-point test cases
        {name: "float64 zero", id: float64(0.0), want: true, wantErr: nil},
        {name: "float64 positive", id: float64(0.000001), want: false, wantErr: nil},
        {name: "float64 negative", id: float64(-0.000001), want: false, wantErr: nil},
        {name: "float64 epsilon", id: float64(1e-300), want: false, wantErr: nil},

        // string test cases
        {name: "empty string", id: "", want: true, wantErr: nil},
        {name: "non-empty string", id: "abc", want: false, wantErr: nil},
        {name: "space string", id: " ", want: false, wantErr: nil},
        {name: "unicode string", id: "こんにちは", want: false, wantErr: nil},

        // special cases
        {name: "nil", id: nil, want: true, wantErr: nil},

        // unsupported ID type test cases
        {name: "bool true", id: true, want: false, wantErr: errors.New("invalid type bool")},
        {name: "bool false", id: false, want: false, wantErr: errors.New("invalid type bool")},
        {name: "struct type", id: struct{}{}, want: false, wantErr: errors.New("invalid type struct {}")},
        {name: "slice type", id: []int{1, 2, 3}, want: false, wantErr: errors.New("invalid type []int")},
        {name: "map type", id: map[string]int{"a": 1}, want: false, wantErr: errors.New("invalid type map[string]int")},
        {name: "pointer type", id: new(int), want: false, wantErr: errors.New("invalid type *int")},
        {name: "func type", id: func() {}, want: false, wantErr: errors.New("invalid type func()")},
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            req := Request{
                SessionId: "test-session",
                JsonRpc:   "2.0",
                ID:        tt.id,
                Method:    "testMethod",
                Params:    json.RawMessage(`{}`),
            }

            got, err := req.isNotification()

            if (err != nil) != (tt.wantErr != nil) {
                t.Fatalf("error presence mismatch: got error = %v, wantErr %v", err, tt.wantErr)
            }
            if err != nil && tt.wantErr != nil && err.Error() != tt.wantErr.Error() {
                t.Fatalf("error message mismatch:\ngot  %q\nwant %q", err.Error(), tt.wantErr.Error())
            }

            if got != tt.want {
                t.Errorf("isNotification() = %v, want %v for ID %v (%T)", got, tt.want, tt.id, tt.id)
            }
        })
    }
}
mcp/util.go
package mcp

import "fmt"

// formatSSEMessage formats a Server-Sent Events message with CRLF line endings
func formatSSEMessage(event string, data []byte) string {
    return fmt.Sprintf("event: %s\r\ndata: %s\r\n\r\n", event, string(data))
}

// ptr returns a pointer to the given value
func ptr[T any](v T) *T {
    return &v
}

// toTypedContents wraps each content item with its MCP "type" discriminator.
func toTypedContents(contents []any) []any {
    typedContents := make([]any, len(contents))

    for i, content := range contents {
        switch v := content.(type) {
        case TextContent:
            typedContents[i] = typedTextContent{
                Type:        ContentTypeText,
                TextContent: v,
            }
        case ImageContent:
            typedContents[i] = typedImageContent{
                Type:         ContentTypeImage,
                ImageContent: v,
            }
        case AudioContent:
            typedContents[i] = typedAudioContent{
                Type:         ContentTypeAudio,
                AudioContent: v,
            }
        default:
            typedContents[i] = typedTextContent{
                Type: ContentTypeText,
                TextContent: TextContent{
                    Text: fmt.Sprintf("Unknown content type: %T", v),
                },
            }
        }
    }

    return typedContents
}

// toTypedPromptMessages wraps each message's content with its MCP "type" discriminator.
func toTypedPromptMessages(messages []PromptMessage) []PromptMessage {
    typedMessages := make([]PromptMessage, len(messages))

    for i, msg := range messages {
        switch v := msg.Content.(type) {
        case TextContent:
            typedMessages[i] = PromptMessage{
                Role: msg.Role,
                Content: typedTextContent{
                    Type:        ContentTypeText,
                    TextContent: v,
                },
            }
        case ImageContent:
            typedMessages[i] = PromptMessage{
                Role: msg.Role,
                Content: typedImageContent{
                    Type:         ContentTypeImage,
                    ImageContent: v,
                },
            }
        case AudioContent:
            typedMessages[i] = PromptMessage{
                Role: msg.Role,
                Content: typedAudioContent{
                    Type:         ContentTypeAudio,
                    AudioContent: v,
                },
            }
        default:
            typedMessages[i] = PromptMessage{
                Role: msg.Role,
                Content: typedTextContent{
                    Type: ContentTypeText,
                    TextContent: TextContent{
                        Text: fmt.Sprintf("Unknown content type: %T", v),
                    },
                },
            }
        }
    }

    return typedMessages
}

// validatePromptArguments checks whether all required arguments are provided.
// It returns the names of any missing required arguments.
func validatePromptArguments(prompt Prompt, providedArgs map[string]string) []string {
    var missingArgs []string

    for _, arg := range prompt.Arguments {
        if arg.Required {
            if value, exists := providedArgs[arg.Name]; !exists || len(value) == 0 {
                missingArgs = append(missingArgs, arg.Name)
            }
        }
    }

    return missingArgs
}

mcp/util_test.go
package mcp

import (
    "bufio"
    "encoding/json"
    "fmt"
    "strings"
    "testing"

    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"
)

// Event is a parsed Server-Sent Events message used by the tests below.
type Event struct {
    Type string
    Data map[string]any
}

// parseEvent parses a raw SSE message into its event type and JSON data.
func parseEvent(input string) (*Event, error) {
    var evt Event
    var dataStr string

    scanner := bufio.NewScanner(strings.NewReader(input))
    for scanner.Scan() {
        line := scanner.Text()
        if strings.HasPrefix(line, "event:") {
            evt.Type = strings.TrimSpace(strings.TrimPrefix(line, "event:"))
        } else if strings.HasPrefix(line, "data:") {
            dataStr = strings.TrimSpace(strings.TrimPrefix(line, "data:"))
        }
    }
    if err := scanner.Err(); err != nil {
        return nil, err
    }

    if len(dataStr) > 0 {
        if err := json.Unmarshal([]byte(dataStr), &evt.Data); err != nil {
            return nil, fmt.Errorf("failed to parse data: %w", err)
        }
    }

    return &evt, nil
}

// TestToTypedPromptMessages tests the toTypedPromptMessages function
func TestToTypedPromptMessages(t *testing.T) {
    // Test with multiple message types in one test
    t.Run("MixedContentTypes", func(t *testing.T) {
        // Create test data with different content types
        messages := []PromptMessage{
            {
                Role: RoleUser,
                Content: TextContent{
                    Text: "Hello, this is a text message",
                    Annotations: &Annotations{
                        Audience: []RoleType{RoleUser, RoleAssistant},
                        Priority: ptr(0.8),
                    },
                },
            },
            {
                Role: RoleAssistant,
                Content: ImageContent{
                    Data:     "base64ImageData",
                    MimeType: "image/jpeg",
                },
            },
            {
                Role: RoleUser,
                Content: AudioContent{
                    Data:     "base64AudioData",
                    MimeType: "audio/mp3",
                },
            },
            {
                Role:    "system",
                Content: "This is a simple string that should be handled as unknown type",
            },
        }

        // Call the function
        result := toTypedPromptMessages(messages)

        // Validate results
        require.Len(t, result, 4, "Should return the same number of messages")

        // Validate first message (TextContent)
        msg := result[0]
        assert.Equal(t, RoleUser, msg.Role, "Role should be preserved")

        // Type assertion since Content is stored as an interface
        typed, ok := msg.Content.(typedTextContent)
        require.True(t, ok, "Should be typedTextContent")
        assert.Equal(t, ContentTypeText, typed.Type, "Type should be text")
        assert.Equal(t, "Hello, this is a text message", typed.Text, "Text content should be preserved")
        require.NotNil(t, typed.Annotations, "Annotations should be preserved")
        assert.Equal(t, []RoleType{RoleUser, RoleAssistant}, typed.Annotations.Audience, "Audience should be preserved")
        require.NotNil(t, typed.Annotations.Priority, "Priority should be preserved")
        assert.Equal(t, 0.8, *typed.Annotations.Priority, "Priority value should be preserved")

        // Validate second message (ImageContent)
        msg = result[1]
        assert.Equal(t, RoleAssistant, msg.Role, "Role should be preserved")

        // Type assertion for image content
        typedImg, ok := msg.Content.(typedImageContent)
        require.True(t, ok, "Should be typedImageContent")
        assert.Equal(t, ContentTypeImage, typedImg.Type, "Type should be image")
        assert.Equal(t, "base64ImageData", typedImg.Data, "Image data should be preserved")
        assert.Equal(t, "image/jpeg", typedImg.MimeType, "MimeType should be preserved")

        // Validate third message (AudioContent)
        msg = result[2]
        assert.Equal(t, RoleUser, msg.Role, "Role should be preserved")

        // Type assertion for audio content
        typedAudio, ok := msg.Content.(typedAudioContent)
        require.True(t, ok, "Should be typedAudioContent")
        assert.Equal(t, ContentTypeAudio, typedAudio.Type, "Type should be audio")
        assert.Equal(t, "base64AudioData", typedAudio.Data, "Audio data should be preserved")
        assert.Equal(t, "audio/mp3", typedAudio.MimeType, "MimeType should be preserved")

        // Validate fourth message (unknown type converted to TextContent)
        msg = result[3]
        assert.Equal(t, RoleType("system"), msg.Role, "Role should be preserved")

        // Should be converted to a typedTextContent with an error message
        typedUnknown, ok := msg.Content.(typedTextContent)
        require.True(t, ok, "Unknown content should be converted to typedTextContent")
        assert.Equal(t, ContentTypeText, typedUnknown.Type, "Type should be text")
        assert.Contains(t, typedUnknown.Text, "Unknown content type:", "Should contain error about unknown type")
        assert.Contains(t, typedUnknown.Text, "string", "Should mention the actual type")
    })

    // Test empty input
    t.Run("EmptyInput", func(t *testing.T) {
        messages := []PromptMessage{}
        result := toTypedPromptMessages(messages)
        assert.Empty(t, result, "Should return empty slice for empty input")
    })

    // Test with nil annotations
    t.Run("NilAnnotations", func(t *testing.T) {
        messages := []PromptMessage{
            {
                Role: RoleUser,
                Content: TextContent{
                    Text:        "Text with nil annotations",
                    Annotations: nil,
                },
            },
        }

        result := toTypedPromptMessages(messages)
        require.Len(t, result, 1, "Should return one message")

        typed, ok := result[0].Content.(typedTextContent)
        require.True(t, ok, "Should be typedTextContent")
        assert.Equal(t, ContentTypeText, typed.Type, "Type should be text")
        assert.Equal(t, "Text with nil annotations", typed.Text, "Text content should be preserved")
        assert.Nil(t, typed.Annotations, "Nil annotations should be preserved as nil")
    })
}

// TestToTypedContents tests the toTypedContents function
func TestToTypedContents(t *testing.T) {
	// Test with multiple content types in one test
	t.Run("MixedContentTypes", func(t *testing.T) {
		// Create test data with different content types
		contents := []any{
			TextContent{
				Text: "Hello, this is a text content",
				Annotations: &Annotations{
					Audience: []RoleType{RoleUser, RoleAssistant},
					Priority: ptr(0.7),
				},
			},
			ImageContent{
				Data:     "base64ImageData",
				MimeType: "image/png",
			},
			AudioContent{
				Data:     "base64AudioData",
				MimeType: "audio/wav",
			},
			"This is a simple string that should be handled as unknown type",
		}

		// Call the function
		result := toTypedContents(contents)

		// Validate results
		require.Len(t, result, 4, "Should return the same number of contents")

		// Validate first content (TextContent)
		typed, ok := result[0].(typedTextContent)
		require.True(t, ok, "Should be typedTextContent")
		assert.Equal(t, ContentTypeText, typed.Type, "Type should be text")
		assert.Equal(t, "Hello, this is a text content", typed.Text, "Text content should be preserved")
		require.NotNil(t, typed.Annotations, "Annotations should be preserved")
		assert.Equal(t, []RoleType{RoleUser, RoleAssistant}, typed.Annotations.Audience, "Audience should be preserved")
		require.NotNil(t, typed.Annotations.Priority, "Priority should be preserved")
		assert.Equal(t, 0.7, *typed.Annotations.Priority, "Priority value should be preserved")

		// Validate second content (ImageContent)
		typedImg, ok := result[1].(typedImageContent)
		require.True(t, ok, "Should be typedImageContent")
		assert.Equal(t, ContentTypeImage, typedImg.Type, "Type should be image")
		assert.Equal(t, "base64ImageData", typedImg.Data, "Image data should be preserved")
		assert.Equal(t, "image/png", typedImg.MimeType, "MimeType should be preserved")

		// Validate third content (AudioContent)
		typedAudio, ok := result[2].(typedAudioContent)
		require.True(t, ok, "Should be typedAudioContent")
		assert.Equal(t, ContentTypeAudio, typedAudio.Type, "Type should be audio")
		assert.Equal(t, "base64AudioData", typedAudio.Data, "Audio data should be preserved")
		assert.Equal(t, "audio/wav", typedAudio.MimeType, "MimeType should be preserved")

		// Validate fourth content (unknown type converted to TextContent)
		typedUnknown, ok := result[3].(typedTextContent)
		require.True(t, ok, "Unknown content should be converted to typedTextContent")
		assert.Equal(t, ContentTypeText, typedUnknown.Type, "Type should be text")
		assert.Contains(t, typedUnknown.Text, "Unknown content type:", "Should contain error about unknown type")
		assert.Contains(t, typedUnknown.Text, "string", "Should mention the actual type")
	})

	// Test empty input
	t.Run("EmptyInput", func(t *testing.T) {
		contents := []any{}
		result := toTypedContents(contents)
		assert.Empty(t, result, "Should return empty slice for empty input")
	})

	// Test with nil annotations
	t.Run("NilAnnotations", func(t *testing.T) {
		contents := []any{
			TextContent{
				Text:        "Text with nil annotations",
				Annotations: nil,
			},
		}

		result := toTypedContents(contents)
		require.Len(t, result, 1, "Should return one content")

		typed, ok := result[0].(typedTextContent)
		require.True(t, ok, "Should be typedTextContent")
		assert.Equal(t, ContentTypeText, typed.Type, "Type should be text")
		assert.Equal(t, "Text with nil annotations", typed.Text, "Text content should be preserved")
		assert.Nil(t, typed.Annotations, "Nil annotations should be preserved as nil")
	})

	// Test with custom struct (should be handled as unknown type)
	t.Run("CustomStruct", func(t *testing.T) {
		type CustomContent struct {
			Data string
		}

		contents := []any{
			CustomContent{
				Data: "custom data",
			},
		}

		result := toTypedContents(contents)
		require.Len(t, result, 1, "Should return one content")

		typed, ok := result[0].(typedTextContent)
		require.True(t, ok, "Custom struct should be converted to typedTextContent")
		assert.Equal(t, ContentTypeText, typed.Type, "Type should be text")
		assert.Contains(t, typed.Text, "Unknown content type:", "Should contain error about unknown type")
		assert.Contains(t, typed.Text, "CustomContent", "Should mention the actual type")
	})
}
mcp/vars.go (149 lines, deleted in this commit range)
@@ -1,149 +0,0 @@
package mcp

import (
	"time"

	"github.com/zeromicro/go-zero/core/syncx"
)

// Protocol constants
const (
	// JSON-RPC version as defined in the specification
	jsonRpcVersion = "2.0"

	// Session identifier key used in request URLs
	sessionIdKey = "session_id"

	// progressTokenKey is used to track progress of long-running tasks
	progressTokenKey = "progressToken"
)

// Server-Sent Events (SSE) event types
const (
	// Standard message event for JSON-RPC responses
	eventMessage = "message"

	// Endpoint event for sending endpoint URL to clients
	eventEndpoint = "endpoint"
)

// Content type identifiers
const (
	// ContentTypeObject is object content type
	ContentTypeObject = "object"

	// ContentTypeText is text content type
	ContentTypeText = "text"

	// ContentTypeImage is image content type
	ContentTypeImage = "image"

	// ContentTypeAudio is audio content type
	ContentTypeAudio = "audio"

	// ContentTypeResource is resource content type
	ContentTypeResource = "resource"
)

// Collection keys for broadcast events
const (
	// Key for prompts collection
	keyPrompts = "prompts"

	// Key for resources collection
	keyResources = "resources"

	// Key for tools collection
	keyTools = "tools"
)

// JSON-RPC error codes
// Standard error codes from JSON-RPC 2.0 spec
const (
	// The JSON sent is not a valid Request object
	errCodeInvalidRequest = -32600

	// The method does not exist / is not available
	errCodeMethodNotFound = -32601

	// Invalid method parameter(s)
	errCodeInvalidParams = -32602

	// Internal JSON-RPC error
	errCodeInternalError = -32603

	// Tool execution timed out
	errCodeTimeout = -32001

	// Resource not found error
	errCodeResourceNotFound = -32002

	// Client hasn't completed initialization
	errCodeClientNotInitialized = -32800
)

// User and assistant role definitions
const (
	// RoleUser is the "user" role - the entity asking questions
	RoleUser RoleType = "user"

	// RoleAssistant is the "assistant" role - the entity providing responses
	RoleAssistant RoleType = "assistant"
)

// Method names as defined in the MCP specification
const (
	// Initialize the connection between client and server
	methodInitialize = "initialize"

	// List available tools
	methodToolsList = "tools/list"

	// Call a specific tool
	methodToolsCall = "tools/call"

	// List available prompts
	methodPromptsList = "prompts/list"

	// Get a specific prompt
	methodPromptsGet = "prompts/get"

	// List available resources
	methodResourcesList = "resources/list"

	// Read a specific resource
	methodResourcesRead = "resources/read"

	// Subscribe to resource updates
	methodResourcesSubscribe = "resources/subscribe"

	// Simple ping to check server availability
	methodPing = "ping"

	// Notification that client is fully initialized
	methodNotificationsInitialized = "notifications/initialized"

	// Notification that a request was canceled
	methodNotificationsCancelled = "notifications/cancelled"
)

// Event names for Server-Sent Events (SSE)
const (
	// Notification of tool list changes
	eventToolsListChanged = "tools/list_changed"

	// Notification of prompt list changes
	eventPromptsListChanged = "prompts/list_changed"

	// Notification of resource list changes
	eventResourcesListChanged = "resources/list_changed"
)

var (
	// Default channel size for events
	eventChanSize = 10

	// Default ping interval for checking connection availability
	// use syncx.ForAtomicDuration to ensure atomicity in test race
	pingInterval = syncx.ForAtomicDuration(30 * time.Second)
)
mcp/vars_test.go (210 lines, deleted in this commit range)
@@ -1,210 +0,0 @@
package mcp

import (
	"encoding/json"
	"net/http/httptest"
	"testing"

	"github.com/stretchr/testify/assert"
)

// TestErrorCodes ensures error codes are applied correctly in error responses
func TestErrorCodes(t *testing.T) {
	testCases := []struct {
		name     string
		code     int
		message  string
		expected string
	}{
		{
			name:     "invalid request error",
			code:     errCodeInvalidRequest,
			message:  "Invalid request",
			expected: `"code":-32600`,
		},
		{
			name:     "method not found error",
			code:     errCodeMethodNotFound,
			message:  "Method not found",
			expected: `"code":-32601`,
		},
		{
			name:     "invalid params error",
			code:     errCodeInvalidParams,
			message:  "Invalid parameters",
			expected: `"code":-32602`,
		},
		{
			name:     "internal error",
			code:     errCodeInternalError,
			message:  "Internal server error",
			expected: `"code":-32603`,
		},
		{
			name:     "timeout error",
			code:     errCodeTimeout,
			message:  "Operation timed out",
			expected: `"code":-32001`,
		},
		{
			name:     "resource not found error",
			code:     errCodeResourceNotFound,
			message:  "Resource not found",
			expected: `"code":-32002`,
		},
		{
			name:     "client not initialized error",
			code:     errCodeClientNotInitialized,
			message:  "Client not initialized",
			expected: `"code":-32800`,
		},
	}

	for _, tc := range testCases {
		t.Run(tc.name, func(t *testing.T) {
			resp := Response{
				JsonRpc: jsonRpcVersion,
				ID:      int64(1),
				Error: &errorObj{
					Code:    tc.code,
					Message: tc.message,
				},
			}
			data, err := json.Marshal(resp)
			assert.NoError(t, err)
			assert.Contains(t, string(data), tc.expected, "Error code should match expected value")
			assert.Contains(t, string(data), tc.message, "Error message should be included")
			assert.Contains(t, string(data), jsonRpcVersion, "JSON-RPC version should be included")
		})
	}
}

// TestJsonRpcVersion ensures the correct JSON-RPC version is used
func TestJsonRpcVersion(t *testing.T) {
	assert.Equal(t, "2.0", jsonRpcVersion, "JSON-RPC version should be 2.0")

	// Test that it's used in responses
	resp := Response{
		JsonRpc: jsonRpcVersion,
		ID:      int64(1),
		Result:  "test",
	}
	data, err := json.Marshal(resp)
	assert.NoError(t, err)
	assert.Contains(t, string(data), `"jsonrpc":"2.0"`, "Response should use correct JSON-RPC version")

	// Test that it's expected in requests
	reqStr := `{"jsonrpc":"2.0","id":1,"method":"test"}`
	var req Request
	err = json.Unmarshal([]byte(reqStr), &req)
	assert.NoError(t, err)
	assert.Equal(t, jsonRpcVersion, req.JsonRpc, "Request should parse correct JSON-RPC version")
}

// TestSessionIdKey ensures session ID extraction works correctly
func TestSessionIdKey(t *testing.T) {
	// Create a mock server implementation
	mock := newMockMcpServer(t)
	defer mock.shutdown()

	// Verify the key constant
	assert.Equal(t, "session_id", sessionIdKey, "Session ID key should be 'session_id'")

	// Test that session ID is extracted correctly
	mockR := httptest.NewRequest("GET", "/?"+sessionIdKey+"=test-session", nil)

	// Since the mock server is using the same session key logic,
	// we can test this by accessing the request query parameters directly
	sessionID := mockR.URL.Query().Get(sessionIdKey)
	assert.Equal(t, "test-session", sessionID, "Session ID should be extracted correctly")
}

// TestEventTypes ensures event types are set correctly in SSE responses
func TestEventTypes(t *testing.T) {
	// Test message event
	assert.Equal(t, "message", eventMessage, "Message event should be 'message'")

	// Test endpoint event
	assert.Equal(t, "endpoint", eventEndpoint, "Endpoint event should be 'endpoint'")

	// Verify them in an actual SSE format string
	messageEvent := "event: " + eventMessage + "\ndata: test\n\n"
	assert.Contains(t, messageEvent, "event: message", "Message event should format correctly")

	endpointEvent := "event: " + eventEndpoint + "\ndata: test\n\n"
	assert.Contains(t, endpointEvent, "event: endpoint", "Endpoint event should format correctly")
}

// TestCollectionKeys checks that collection keys are used correctly
func TestCollectionKeys(t *testing.T) {
	// Verify collection key constants
	assert.Equal(t, "prompts", keyPrompts, "Prompts key should be 'prompts'")
	assert.Equal(t, "resources", keyResources, "Resources key should be 'resources'")
	assert.Equal(t, "tools", keyTools, "Tools key should be 'tools'")
}

// TestRoleTypes checks that role types are used correctly
func TestRoleTypes(t *testing.T) {
	// Test in annotations
	annotations := Annotations{
		Audience: []RoleType{RoleUser, RoleAssistant},
	}
	data, err := json.Marshal(annotations)
	assert.NoError(t, err)
	assert.Contains(t, string(data), `"audience":["user","assistant"]`, "Role types should marshal correctly")
}

// TestMethodNames checks that method names are used correctly
func TestMethodNames(t *testing.T) {
	// Verify method name constants
	methods := map[string]string{
		"initialize":                methodInitialize,
		"tools/list":                methodToolsList,
		"tools/call":                methodToolsCall,
		"prompts/list":              methodPromptsList,
		"prompts/get":               methodPromptsGet,
		"resources/list":            methodResourcesList,
		"resources/read":            methodResourcesRead,
		"resources/subscribe":       methodResourcesSubscribe,
		"ping":                      methodPing,
		"notifications/initialized": methodNotificationsInitialized,
		"notifications/cancelled":   methodNotificationsCancelled,
	}

	for expected, actual := range methods {
		assert.Equal(t, expected, actual, "Method name should be "+expected)
	}

	// Test in a request
	for methodName := range methods {
		req := Request{
			JsonRpc: jsonRpcVersion,
			ID:      int64(1),
			Method:  methodName,
		}
		data, err := json.Marshal(req)
		assert.NoError(t, err)
		assert.Contains(t, string(data), `"method":"`+methodName+`"`, "Method name should be used in requests")
	}
}

// TestEventNames checks that event names are used correctly
func TestEventNames(t *testing.T) {
	// Verify event name constants
	events := map[string]string{
		"tools/list_changed":     eventToolsListChanged,
		"prompts/list_changed":   eventPromptsListChanged,
		"resources/list_changed": eventResourcesListChanged,
	}

	for expected, actual := range events {
		assert.Equal(t, expected, actual, "Event name should be "+expected)
	}

	// Test event names in SSE format
	for _, eventName := range events {
		sseEvent := "event: " + eventName + "\ndata: test\n\n"
		assert.Contains(t, sseEvent, "event: "+eventName, "Event name should format correctly in SSE")
	}
}

readme-cn.md
@@ -17,7 +17,7 @@
 <a href="https://trendshift.io/repositories/3263" target="_blank"><img src="https://trendshift.io/api/badge/repositories/3263" alt="zeromicro%2Fgo-zero | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
 <a href="https://www.producthunt.com/posts/go-zero?utm_source=badge-featured&utm_medium=badge&utm_souce=badge-go-zero" target="_blank"><img src="https://api.producthunt.com/widgets/embed-image/v1/featured.svg?post_id=334030&theme=light" alt="go-zero - A web & rpc framework written in Go. | Product Hunt" style="width: 250px; height: 54px;" width="250" height="54" /></a>
 
-## 0. go-zero 介绍
+## go-zero 介绍
 
 go-zero(收录于 CNCF 云原生技术全景图:[https://landscape.cncf.io/?selected=go-zero](https://landscape.cncf.io/?selected=go-zero))是一个集成了各种工程实践的 web 和 rpc 框架。通过弹性设计保障了大并发服务端的稳定性,经受了充分的实战检验。
 
@@ -25,72 +25,50 @@ go-zero 包含极简的 API 定义和生成工具 goctl,可以根据定义的
 使用 go-zero 的好处:
 
-* 轻松获得支撑千万日活服务的稳定性
-* 内建级联超时控制、限流、自适应熔断、自适应降载等微服务治理能力,无需配置和额外代码
-* 微服务治理中间件可无缝集成到其它现有框架使用
-* 极简的 API 描述,一键生成各端代码
-* 自动校验客户端请求参数合法性
-* 大量微服务治理和并发工具包
+* 经过千万日活服务验证的稳定性
+* 内建弹性保护:级联超时、限流、熔断、降载(无需配置)
+* 极简 API 语法生成多端代码
+* 自动参数校验和丰富的微服务工具包
 
 (架构图,图片略)
 
-## 1. go-zero 框架背景
+## go-zero 框架背景
 
-18 年初,我们决定从 `Java+MongoDB` 的单体架构迁移到微服务架构,经过仔细思考和对比,我们决定:
+18 年初,我们决定从 `Java+MongoDB` 的单体架构迁移到微服务架构,选择:
 
-* 基于 Go 语言
-* 高效的性能
-* 简洁的语法
-* 广泛验证的工程效率
-* 极致的部署体验
-* 极低的服务端资源成本
-* 自研微服务框架
-* 有过很多微服务框架自研经验
-* 需要有更快速的问题定位能力
-* 更便捷的增加新特性
+* **基于 Go 语言** - 高效性能、简洁语法、极致部署体验、极低资源成本
+* **自研微服务框架** - 更快速的问题定位、更便捷的新特性增加
 
-## 2. go-zero 框架设计思考
+## go-zero 框架设计思考
 
-对于微服务框架的设计,我们期望保障微服务稳定性的同时,也要特别注重研发效率。所以设计之初,我们就有如下一些准则:
+go-zero 遵循以下核心设计准则:
 
-* 保持简单,第一原则
-* 弹性设计,面向故障编程
-* 工具大于约定和文档
-* 高可用、高并发、易扩展
-* 对业务开发友好,封装复杂度
-* 约束做一件事只有一种方式
+* **保持简单** - 简单是第一原则
+* **高可用** - 高并发、易扩展
+* **弹性设计** - 面向故障编程
+* **工具驱动** - 工具大于约定和文档
+* **业务友好** - 封装复杂度、一事一法
 
-我们经历不到半年时间,彻底完成了从 `Java+MongoDB` 到 `Golang+MySQL` 为主的微服务体系迁移,并于 18 年 8 月底完全上线,稳定保障了业务后续迅速增长,确保了整个服务的高可用。
-
-## 3. go-zero 项目实现和特点
-
-go-zero 是一个集成了各种工程实践的包含 web 和 rpc 框架,有如下主要特点:
-
-* 强大的工具支持,尽可能少的代码编写
-* 极简的接口
-* 完全兼容 net/http
-* 支持中间件,方便扩展
-* 高性能
-* 面向故障编程,弹性设计
-* 内建服务发现、负载均衡
-* 内建限流、熔断、降载,且自动触发,自动恢复
-* API 参数自动校验
-* 超时级联控制
-* 自动缓存控制
-* 链路跟踪、统计报警等
-* 高并发支撑,稳定保障了疫情期间每天的流量洪峰
-
-如下图,我们从多个层面保障了整体服务的高可用:
+## go-zero 项目实现和特点
+
+go-zero 集成各种工程实践,主要特点:
+
+* **强大工具支持** - 尽可能少的代码编写
+* **极简接口** - 完全兼容 net/http
+* **高性能** - 优化的速度和效率
+* **弹性设计** - 内建限流、熔断、降载,自动触发、自动恢复
+* **服务治理** - 内建服务发现、负载均衡、链路跟踪
+* **开发工具** - API 参数自动校验、超时级联控制、自动缓存控制
 
 (弹性设计示意图,图片略)
 
-## 4. 我们使用 go-zero 的基本架构图
+## 我们使用 go-zero 的基本架构图
 
 <img width="1067" alt="image" src="https://user-images.githubusercontent.com/1918356/171880582-11a86658-41c3-466c-95e7-7b1220eecc52.png">
 
 觉得不错的话,别忘 **star** 👏
 
-## 5. Installation
+## Installation
 
 在项目目录下通过如下命令安装:
 
@@ -98,7 +76,57 @@ go-zero 是一个集成了各种工程实践的包含 web 和 rpc 框架,有
 GO111MODULE=on GOPROXY=https://goproxy.cn/,direct go get -u github.com/zeromicro/go-zero
 ```
 
-## 6. Quick Start
+## AI 原生开发
+
+go-zero 团队构建了完整的 AI 工具生态,让 Claude、GitHub Copilot、Cursor 生成符合 go-zero 规范的代码。
+
+### 三大核心项目
+
+**[ai-context](https://github.com/zeromicro/ai-context)** - AI 的工作流程指南
+
+**[zero-skills](https://github.com/zeromicro/zero-skills)** - 模式库和示例
+
+**[mcp-zero](https://github.com/zeromicro/mcp-zero)** - 基于 MCP 的代码生成工具
+
+### 快速配置
+
+#### GitHub Copilot
+```bash
+git submodule add https://github.com/zeromicro/ai-context.git .github/ai-context
+ln -s ai-context/00-instructions.md .github/copilot-instructions.md # macOS/Linux
+# Windows: mklink .github\copilot-instructions.md .github\ai-context\00-instructions.md
+git submodule update --remote .github/ai-context # 更新
+```
+
+#### Cursor
+```bash
+git submodule add https://github.com/zeromicro/ai-context.git .cursorrules
+git submodule update --remote .cursorrules # 更新
+```
+
+#### Windsurf
+```bash
+git submodule add https://github.com/zeromicro/ai-context.git .windsurfrules
+git submodule update --remote .windsurfrules # 更新
+```
+
+#### Claude Desktop
+```bash
+git clone https://github.com/zeromicro/mcp-zero.git && cd mcp-zero && go build
+# 配置: ~/Library/Application Support/Claude/claude_desktop_config.json
+# 或: claude mcp add --transport stdio mcp-zero --env GOCTL_PATH=/path/to/goctl -- /path/to/mcp-zero
+```
+
+### 协同工作原理
+
+AI 助手通过三个工具协同配合:
+1. **ai-context** - 工作流程指导
+2. **zero-skills** - 实现模式
+3. **mcp-zero** - 实时代码生成
+
+**示例**:创建新的 REST API → AI 读取 **ai-context** 了解工作流 → 调用 **mcp-zero** 生成代码 → 参考 **zero-skills** 实现模式 → 生成符合规范的代码 ✅
+
+## Quick Start
 
 0. 完整示例请查看
 
@@ -108,23 +136,22 @@ GO111MODULE=on GOPROXY=https://goproxy.cn/,direct go get -u github.com/zeromicro
 1. 安装 goctl 工具
 
-`goctl` 读作 `go control`,不要读成 `go C-T-L`。`goctl` 的意思是不要被代码控制,而是要去控制它。其中的 `go` 不是指 `golang`。在设计 `goctl` 之初,我就希望通过 `工具` 来解放我们的双手👈
-
 ```shell
 # Go
 GOPROXY=https://goproxy.cn/,direct go install github.com/zeromicro/go-zero/tools/goctl@latest
 
 # For Mac
 brew install goctl
 
 # docker for all platforms
 docker pull kevinwan/goctl
 # run goctl
 docker run --rm -it -v `pwd`:/app kevinwan/goctl --help
 ```
 
-确保 goctl 可执行,并且在 $PATH 环境变量里。
+确保 goctl 可执行并在 $PATH 环境变量里。
 
 2. 快速生成 api 服务
 
 ```shell
@@ -157,7 +184,7 @@ GO111MODULE=on GOPROXY=https://goproxy.cn/,direct go get -u github.com/zeromicro
 * 可以在 `servicecontext.go` 里面传递依赖给 logic,比如 mysql, redis 等
 * 在 api 定义的 `get/post/put/delete` 等请求对应的 logic 里增加业务处理逻辑
 
-3. 可以根据 api 文件生成前端需要的 Java, TypeScript, Dart, JavaScript 代码
+3. 生成多语言客户端代码
 
 ```shell
 goctl api java -api greet.api -dir greet
@@ -165,17 +192,17 @@ GO111MODULE=on GOPROXY=https://goproxy.cn/,direct go get -u github.com/zeromicro
 ...
 ```
 
-## 7. Benchmark
+## Benchmark
 
 (基准测试图,图片略)
 
 [测试代码见这里](https://github.com/smallnest/go-web-framework-benchmark)
 
-## 8. 文档
+## 文档
 
 * API 文档
 
-[https://go-zero.dev/cn/](https://go-zero.dev/cn/)
+[https://go-zero.dev](https://go-zero.dev)
 
 * awesome 系列(更多文章见『微服务实践』公众号)
 
@@ -192,9 +219,9 @@ GO111MODULE=on GOPROXY=https://goproxy.cn/,direct go get -u github.com/zeromicro
 | [goctl-android](https://github.com/zeromicro/goctl-android) | 生成 `java (android)` 端 `http client` 请求代码 |
 | [goctl-go-compact](https://github.com/zeromicro/goctl-go-compact) | 合并 `api` 里同一个 `group` 里的 `handler` 到一个 `go` 文件 |
 
-## 9. go-zero 用户
+## go-zero 用户
 
-go-zero 已被许多公司用于生产部署,接入场景如在线教育、电商业务、游戏、区块链等,目前为止,已使用 go-zero 的公司包括但不限于:
+go-zero 已被众多公司用于生产部署,场景涵盖在线教育、电商、游戏、区块链等。目前使用 go-zero 的公司包括但不限于:
 
 >1. 好未来
 >2. 上海晓信信息科技有限公司(晓黑板)
@@ -304,10 +331,14 @@ go-zero 已被许多公司用于生产部署,接入场景如在线教育、电
 >106. 无锡盛算信息技术有限公司
 >107. 深圳市聚货通信息科技有限公司
 >108. 浙江银盾云科技有限公司
+>109. 南京造世网络科技有限公司
+>110. 温州飞儿云信息技术有限公司
+>111. 统信软件
+>112. 深圳坐标软件集团有限公司
 
 如果贵公司也已使用 go-zero,欢迎在 [登记地址](https://github.com/zeromicro/go-zero/issues/602) 登记,仅仅为了推广,不做其它用途。
 
-## 10. CNCF 云原生技术全景图
+## CNCF 云原生技术全景图
 
 <p float="left">
 <img src="https://raw.githubusercontent.com/zeromicro/zero-doc/main/doc/images/cncf-logo.svg" width="200"/>
@@ -316,13 +347,13 @@ go-zero 收录在 [CNCF Cloud Native 云原生技术全景图](https://landscape
 
 go-zero 收录在 [CNCF Cloud Native 云原生技术全景图](https://landscape.cncf.io/?selected=go-zero)。
 
-## 11. 微信公众号
+## 微信公众号
 
 `go-zero` 相关文章和视频都会在 `微服务实践` 公众号整理呈现,欢迎扫码关注 👏
 
 <img src="https://raw.githubusercontent.com/zeromicro/zero-doc/main/doc/images/zeromicro.jpg" alt="wechat" width="600" />
 
-## 12. 微信交流群
+## 微信交流群
 
 如果文档中未能覆盖的任何疑问,欢迎您在群里提出,我们会尽快答复。
 
@@ -332,10 +363,4 @@ go-zero 收录在 [CNCF Cloud Native 云原生技术全景图](https://landscape
 加群之前有劳点一下 ***star***,一个小小的 ***star*** 是作者们回答海量问题的动力!🤝
 
 <img src="https://raw.githubusercontent.com/zeromicro/zero-doc/main/doc/images/wechat.jpg" alt="wechat" width="300" />
-
-## 13. 知识星球
-
-官方团队运营的知识星球
-
-<img src="https://raw.githubusercontent.com/zeromicro/zero-doc/main/doc/images/zsxq.jpg" alt="知识星球" width="300" />
152
readme.md
152
readme.md
@@ -42,61 +42,39 @@ go-zero contains simple API description syntax and code generation tool called `
|
|||||||
|
|
||||||
## Backgrounds of go-zero
|
## Backgrounds of go-zero
|
||||||
|
|
||||||
In early 2018, we transitioned from a Java+MongoDB monolithic architecture to microservices, choosing:

* **Golang** - High performance, simple syntax, excellent deployment experience, and low resource consumption
* **Self-designed microservice framework** - Better problem isolation, easier feature extension, and faster issue resolution
## Design considerations on go-zero

go-zero follows these core design principles:

* **Simplicity** - Keep it simple, first principle
* **High availability** - Stable under high concurrency
* **Resilience** - Failure-oriented programming with adaptive protection
* **Developer friendly** - Encapsulate complexity, one way to do one thing
* **Easy to extend** - Flexible architecture for growth
## The implementation and features of go-zero

go-zero integrates engineering best practices:

* **Code generation** - Powerful tools to minimize boilerplate
* **Simple API** - Clean interfaces, fully compatible with net/http
* **High performance** - Optimized for speed and efficiency
* **Resilience** - Built-in circuit breaker, rate limiting, load shedding, timeout control
* **Service mesh** - Service discovery, load balancing, call tracing
* **Developer tools** - Auto parameter validation, cache management, metrics and monitoring

|

|
||||||
|
|
||||||
## Architecture with go-zero

<img width="1067" alt="image" src="https://user-images.githubusercontent.com/1918356/171880372-5010d846-e8b1-4942-8fe2-e2bbb584f762.png">
## Installation

Run the following command under your project:

```shell
go get -u github.com/zeromicro/go-zero
```
## AI-Native Development

The go-zero team provides AI tooling for Claude, GitHub Copilot, and Cursor to generate framework-compliant code.

### Three Core Projects

**[ai-context](https://github.com/zeromicro/ai-context)** - Workflow guide for AI assistants

**[zero-skills](https://github.com/zeromicro/zero-skills)** - Pattern library with examples

**[mcp-zero](https://github.com/zeromicro/mcp-zero)** - Code generation tools via Model Context Protocol
### Quick Setup

#### GitHub Copilot

```bash
git submodule add https://github.com/zeromicro/ai-context.git .github/ai-context
ln -s ai-context/00-instructions.md .github/copilot-instructions.md # macOS/Linux
# Windows: mklink .github\copilot-instructions.md .github\ai-context\00-instructions.md
git submodule update --remote .github/ai-context # Update
```
#### Cursor

```bash
git submodule add https://github.com/zeromicro/ai-context.git .cursorrules
git submodule update --remote .cursorrules # Update
```
#### Windsurf

```bash
git submodule add https://github.com/zeromicro/ai-context.git .windsurfrules
git submodule update --remote .windsurfrules # Update
```
#### Claude Desktop

```bash
git clone https://github.com/zeromicro/mcp-zero.git && cd mcp-zero && go build
# Configure: ~/Library/Application Support/Claude/claude_desktop_config.json
# Or: claude mcp add --transport stdio mcp-zero --env GOCTL_PATH=/path/to/goctl -- /path/to/mcp-zero
```
### How It Works

AI assistants use these tools together:

1. **ai-context** - workflow guidance
2. **zero-skills** - implementation patterns
3. **mcp-zero** - real-time code generation

**Example**: Creating a REST API → AI reads **ai-context** for workflow → calls **mcp-zero** to generate code → references **zero-skills** for patterns → produces production-ready code ✅

## Quick Start
1. Full examples:

   [Rapid development of microservice systems](https://github.com/zeromicro/zero-doc/blob/main/doc/shorturl-en.md)
2. Install goctl
```shell
# for Go
go install github.com/zeromicro/go-zero/tools/goctl@latest

# For Mac
brew install goctl

# docker for all platforms
docker pull kevinwan/goctl
# run goctl
docker run --rm -it -v `pwd`:/app kevinwan/goctl --help
```

Ensure goctl is executable and in your $PATH.
3. Create the API file (greet.api):

```go
type (
	...
}
```
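Since the hunk above elides the type and service definitions, here is a hand-written sketch of what a minimal greet.api can look like, consistent with the `/greet/from/you` curl call later in this quick start. Names like `GreetHandler` and the field tags are illustrative, not copied from goctl output:

```go
type (
	Request {
		Name string `path:"name,options=you|me"`
	}

	Response {
		Message string `json:"message"`
	}
)

service greet-api {
	@handler GreetHandler
	get /greet/from/:name (Request) returns (Response)
}
```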
Generate .api template:

```shell
goctl api -o greet.api
```
4. Generate Go server code

```shell
goctl api go -api greet.api -dir greet
```
Generated structure:

```Plain Text
├── greet
    ...
└── greet.api                  // api description file
```
Run the service:

```shell
cd greet
go run greet.go -f etc/greet-api.yaml
```
Default port: 8888 (configurable in etc/greet-api.yaml)

Test with curl:
```shell
curl -i http://localhost:8888/greet/from/you
```
Response:

```http
HTTP/1.1 200 OK
...
Content-Length: 0
```
5. Write business logic

   * Pass dependencies (mysql, redis, etc.) via servicecontext.go
   * Add logic code in the logic package per .api definition
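As a sketch of the dependency-passing pattern, the following is illustrative plain Go, not goctl output; real generated code lives in a svc package and uses go-zero's config types, and every field name here is hypothetical:

```go
package main

import "fmt"

// Config stands in for the service configuration loaded from etc/greet-api.yaml.
// The fields are hypothetical.
type Config struct {
	DataSource string
	CacheAddr  string
}

// ServiceContext bundles shared dependencies (DB handles, caches, RPC clients)
// so handlers and logic receive them explicitly instead of using globals.
type ServiceContext struct {
	Config Config
}

// NewServiceContext wires the dependencies once at startup.
func NewServiceContext(c Config) *ServiceContext {
	return &ServiceContext{Config: c}
}

func main() {
	svcCtx := NewServiceContext(Config{
		DataSource: "user:pass@tcp(127.0.0.1:3306)/greet",
		CacheAddr:  "127.0.0.1:6379",
	})
	fmt.Println(svcCtx.Config.CacheAddr)
}
```

Handlers then take the `*ServiceContext` as a constructor argument, which keeps the logic layer testable.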
6. Generate client code for multiple languages

```shell
goctl api java -api greet.api -dir greet
# ... (other target languages elided)
```
* [Rapid development of microservice systems - multiple RPCs](https://github.com/zeromicro/zero-doc/blob/main/docs/zero/bookstore-en.md)
* [Examples](https://github.com/zeromicro/zero-examples)

## Chat group

Join the chat via https://discord.gg/4JQvC5A4Fe

## Cloud Native Landscape
<p float="left">
  <img src="https://raw.githubusercontent.com/zeromicro/zero-doc/main/doc/images/cncf-logo.svg" width="200"/>
</p>
In `buildSSERoutes`, the write-deadline failure is downgraded from an error log to a debug log:

```go
// because SSE requires the connection to be kept alive indefinitely.
rc := http.NewResponseController(w)
if err := rc.SetWriteDeadline(time.Time{}); err != nil {
	// Some ResponseWriter implementations (like timeoutWriter) don't support SetWriteDeadline.
	// This is expected behavior and doesn't affect SSE functionality.
	logc.Debugf(r.Context(), "unable to clear write deadline for SSE connection: %v", err)
}

w.Header().Set(header.ContentType, header.ContentTypeEventStream)
```
In the request log handler, an SSE-specific slow-call threshold is introduced alongside the existing one:

```go
const (
	limitBodyBytes          = 1024
	limitDetailedBodyBytes  = 4096
	defaultSlowThreshold    = time.Millisecond * 500
	defaultSSESlowThreshold = time.Minute * 3
)

var (
	slowThreshold    = syncx.ForAtomicDuration(defaultSlowThreshold)
	sseSlowThreshold = syncx.ForAtomicDuration(defaultSSESlowThreshold)
)

// SetSSESlowThreshold sets the slow threshold for SSE requests.
func SetSSESlowThreshold(threshold time.Duration) {
	sseSlowThreshold.Set(threshold)
}

func getSlowThreshold(r *http.Request) time.Duration {
	if r.Header.Get(headerAccept) == valueSSE {
		return sseSlowThreshold.Load()
	}

	return slowThreshold.Load()
}
```

Both `logBrief` and `logDetails` now compare the elapsed time against `getSlowThreshold(r)` instead of `slowThreshold.Load()` before emitting a slowcall log.
The accompanying tests cover normal SSE requests, SSE requests that exceed the SSE threshold, and the threshold selection itself:

```go
func TestLogHandlerSSE(t *testing.T) {
	handlers := []func(handler http.Handler) http.Handler{
		LogHandler,
		DetailedLogHandler,
	}

	for _, logHandler := range handlers {
		t.Run("SSE request with normal duration", func(t *testing.T) {
			req := httptest.NewRequest(http.MethodGet, "http://localhost", http.NoBody)
			req.Header.Set(headerAccept, valueSSE)

			handler := logHandler(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
				time.Sleep(defaultSlowThreshold + time.Second)
				w.WriteHeader(http.StatusOK)
			}))

			resp := httptest.NewRecorder()
			handler.ServeHTTP(resp, req)
			assert.Equal(t, http.StatusOK, resp.Code)
		})

		t.Run("SSE request exceeding SSE threshold", func(t *testing.T) {
			originalThreshold := sseSlowThreshold.Load()
			SetSSESlowThreshold(time.Millisecond * 100)
			defer SetSSESlowThreshold(originalThreshold)

			req := httptest.NewRequest(http.MethodGet, "http://localhost", http.NoBody)
			req.Header.Set(headerAccept, valueSSE)

			handler := logHandler(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
				time.Sleep(time.Millisecond * 150)
				w.WriteHeader(http.StatusOK)
			}))

			resp := httptest.NewRecorder()
			handler.ServeHTTP(resp, req)
			assert.Equal(t, http.StatusOK, resp.Code)
		})
	}
}

func TestLogHandlerThresholdSelection(t *testing.T) {
	tests := []struct {
		name          string
		acceptHeader  string
		expectedIsSSE bool
	}{
		{name: "Regular HTTP request", acceptHeader: "text/html", expectedIsSSE: false},
		{name: "SSE request", acceptHeader: valueSSE, expectedIsSSE: true},
		{name: "No Accept header", acceptHeader: "", expectedIsSSE: false},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			req := httptest.NewRequest(http.MethodGet, "http://localhost", http.NoBody)
			if tt.acceptHeader != "" {
				req.Header.Set(headerAccept, tt.acceptHeader)
			}

			SetSlowThreshold(time.Millisecond * 100)
			SetSSESlowThreshold(time.Millisecond * 200)
			defer func() {
				SetSlowThreshold(defaultSlowThreshold)
				SetSSESlowThreshold(defaultSSESlowThreshold)
			}()

			handler := LogHandler(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
				time.Sleep(time.Millisecond * 150)
				w.WriteHeader(http.StatusOK)
			}))

			resp := httptest.NewRecorder()
			handler.ServeHTTP(resp, req)
			assert.Equal(t, http.StatusOK, resp.Code)
		})
	}
}

func TestSetSSESlowThreshold(t *testing.T) {
	assert.Equal(t, defaultSSESlowThreshold, sseSlowThreshold.Load())
	SetSSESlowThreshold(time.Minute * 10)
	assert.Equal(t, time.Minute*10, sseSlowThreshold.Load())
}
```
A doc-comment typo in the trace options ("an traceOptions") is fixed:

```go
type (
	// TraceOption defines the method to customize a traceOptions.
	TraceOption func(options *traceOptions)

	// traceOptions is TraceHandler options.
	...
)
```
The tracing tests switch from the Jaeger batcher to OTLP over HTTP. `TestOtelHandler`, `TestDontTracingSpan`, and `TestTraceResponseWriter` all start the agent the same way:

```go
ztrace.StartAgent(ztrace.Config{
	Name:         "go-zero-test",
	Endpoint:     "http://localhost:14268",
	OtlpHttpPath: "/v1/traces",
	Batcher:      "otlphttp",
	Sampler:      1.0,
})
defer ztrace.StopAgent()
```
In `buildRequest`, a JSON body is now rejected for HEAD requests as well as GET:

```go
jsonVars, hasJsonBody := val[jsonKey]
if hasJsonBody {
	switch method {
	case http.MethodGet:
		return nil, ErrGetWithBody
	case http.MethodHead:
		return nil, ErrHeadWithBody
	}

	var buf bytes.Buffer
	// ...
}
```
`TestDoRequest` gets the same OTLP exporter configuration:

```go
ztrace.StartAgent(ztrace.Config{
	Name:         "go-zero-test",
	Endpoint:     "http://localhost:14268",
	OtlpHttpPath: "/v1/traces",
	Batcher:      "otlphttp",
	Sampler:      1.0,
})
defer ztrace.StopAgent()
```
A new table-driven test verifies the body rules per HTTP method:

```go
func TestBuildRequestWithBody(t *testing.T) {
	testBody := struct {
		Key   string `json:"key"`
		Value int    `json:"value"`
	}{
		Key:   "foo",
		Value: 10,
	}

	testcases := []struct {
		testName  string
		method    string
		url       string
		body      any
		wantedErr error
	}{
		{testName: "GET Request with Body", method: http.MethodGet, url: "/ping", body: testBody, wantedErr: ErrGetWithBody},
		{testName: "GET Request without Body", method: http.MethodGet, url: "/ping", body: nil, wantedErr: nil},
		{testName: "HEAD Request with Body", method: http.MethodHead, url: "/ping", body: testBody, wantedErr: ErrHeadWithBody},
		{testName: "HEAD Request without Body", method: http.MethodHead, url: "/ping", body: nil, wantedErr: nil},
		{testName: "POST Request with Body", method: http.MethodPost, url: "/ping", body: testBody, wantedErr: nil},
		{testName: "PUT Request with Body", method: http.MethodPut, url: "/ping", body: testBody, wantedErr: nil},
		{testName: "PATCH Request with Body", method: http.MethodPatch, url: "/ping", body: testBody, wantedErr: nil},
		{testName: "DELETE Request with Body", method: http.MethodDelete, url: "/ping", body: testBody, wantedErr: nil},
		{testName: "CONNECT Request with Body", method: http.MethodConnect, url: "/ping", body: testBody, wantedErr: nil},
		{testName: "OPTIONS Request with Body", method: http.MethodOptions, url: "/ping", body: testBody, wantedErr: nil},
		{testName: "TRACE Request with Body", method: http.MethodTrace, url: "/ping", body: testBody, wantedErr: nil},
	}

	for _, tc := range testcases {
		t.Run(tc.testName, func(t *testing.T) {
			_, err := buildRequest(context.Background(), tc.method, tc.url, tc.body)
			assert.Equal(t, tc.wantedErr, err)
		})
	}
}
```
In package httpc, the breaker's acceptability check is extracted into an `acceptable` helper that also inspects the error type. The imports gain `errors`, `net`, and `net/url`:

```go
import (
	"context"
	"errors"
	"net"
	"net/http"
	"net/url"

	"github.com/zeromicro/go-zero/core/breaker"
)
```

The breaker callback in `do` now delegates to the helper:

```go
		resp, err = s.cli.Do(r)
		return err
	}, func(err error) bool {
		return acceptable(resp, err)
	})

	return
}

// acceptable determines whether the HTTP request/response should be considered
// successful for circuit breaker purposes.
//
// Returns true (acceptable) for:
//   - HTTP status codes < 500 (2xx, 3xx, 4xx)
//   - Context cancellation (user-initiated)
//   - Non-network errors (application-level errors)
//
// Returns false (not acceptable, triggers breaker) for:
//   - HTTP status codes >= 500 (server errors)
//   - context.DeadlineExceeded (timeout)
//   - Network errors (connection refused, DNS failures, etc.)
func acceptable(resp *http.Response, err error) bool {
	if err == nil {
		return resp.StatusCode < http.StatusInternalServerError
	}

	if errors.Is(err, context.DeadlineExceeded) {
		return false
	}

	if errors.Is(err, context.Canceled) {
		return true
	}

	// Unwrap url.Error if present
	var ue *url.Error
	if errors.As(err, &ue) {
		err = ue.Unwrap()
	}

	// Network errors are not acceptable
	var ne net.Error
	return !errors.As(err, &ne)
}
```