# Config System Redesign

## 1. Problem Statement

`LoadConfig()` panics on the server because the config system assumes a writable
config directory. On the server, `OPAL_CONFIG_DIR=/etc/opal` is read-only
(systemd `ReadOnlyPaths`), `opal.yml` doesn't exist (Ansible only deploys
`opal.env`), and the attempt to create it fails. The nil `*Config` propagates
unchecked through `GetConfig()` callers to `BuildUrgencyCoefficients(nil)`,
causing a nil-pointer panic.

### Panic chain

```
sortByUrgency()                          report.go:426
  cfg, _ := GetConfig()                  ← error discarded, cfg = nil
  coeffs := BuildUrgencyCoefficients(cfg)
    return &UrgencyCoefficients{
        Due: cfg.UrgencyDue,             ← nil dereference → PANIC
    }
```

### Root cause

The config system was designed for the CLI (user-writable `~/.config/opal/`)
and has three interacting issues:

1. **Write-on-read**: `LoadConfig()` creates directories and writes a default
   `opal.yml` as a side effect of *reading* config. This fails when the
   filesystem is read-only.
2. **Error swallowing**: All internal callers use `cfg, _ := GetConfig()`,
   turning a load failure into a nil-pointer panic instead of a graceful error.
3. **No mode awareness**: The same code path runs for CLI users (writable home
   dir, interactive, config file is useful) and for the server (read-only
   `/etc/opal`, headless, defaults are fine).

---

## 2. Current Architecture

```mermaid
graph TD
    subgraph "Config Sources"
        ENV["opal.env<br/>(systemd EnvironmentFile)"]
        YML["opal.yml<br/>(Viper / YAML)"]
        DEFAULTS["Hardcoded defaults<br/>(in LoadConfig)"]
    end

    subgraph "Loading Mechanisms"
        AUTH_LOAD["auth.LoadConfig()<br/>os.Getenv() → auth.Config"]
        ENGINE_LOAD["engine.LoadConfig()<br/>Viper → engine.Config"]
    end

    subgraph "Consumers"
        SERVER["API Server<br/>(handlers, middleware)"]
        CLI["CLI Commands<br/>(cmd/ package)"]
    end

    ENV --> AUTH_LOAD
    YML --> ENGINE_LOAD
    DEFAULTS --> ENGINE_LOAD

    AUTH_LOAD --> SERVER
    ENGINE_LOAD --> SERVER
    ENGINE_LOAD --> CLI
```

### Two independent config subsystems

| Aspect | `engine.Config` | `auth.Config` |
|--------|-----------------|---------------|
| **Source** | `opal.yml` via Viper | Environment variables |
| **Loaded by** | `engine.LoadConfig()` | `auth.LoadConfig()` |
| **Caching** | Singleton (`globalConfig`) | None (re-read each call) |
| **Write side effects** | Creates dir + file on load | None |
| **Error model** | Returns `(*Config, error)` | Returns `*Config` (no error) |
| **Used by** | CLI + Server (lazy) | Server only |

### `engine.Config` field categories

| Category | Fields | CLI | Server | Notes |
|----------|--------|-----|--------|-------|
| **Display** | `DefaultFilter`, `DefaultSort`, `DefaultReport`, `ColorOutput` | Yes | No | Terminal-only |
| **Date** | `WeekStartDay`, `DefaultDueTime` | Yes | Yes | Used by `ParseDate()` |
| **Urgency** | 13 urgency coefficients | Yes | Yes | Core scoring logic |
| **Limits** | `NextLimit` | Yes | Indirectly | Report limit |
| **Sync** | 6 sync fields | Yes | No | Client-side sync config |

Key observation: the server only needs **urgency coefficients** and **date
settings** from `engine.Config`. It doesn't need display preferences or sync
settings, yet all 30 fields are loaded through the same mechanism.

### `engine.LoadConfig()` flow

```
1. Check globalConfig singleton → return if cached
2. GetConfigDir() → resolve directory path
3. os.MkdirAll(configDir)        ← FAILS on read-only FS
4. Viper.ReadInConfig()
5. If read fails:
   a. Viper.WriteConfigAs()      ← FAILS on read-only FS
   b. Re-read written file
6. Unmarshal → Config struct
7. Cache in globalConfig
```

Steps 3 and 5a are the failure points on the server.

### Caller error handling audit

| Caller | File:Line | Error handling |
|--------|-----------|----------------|
| `FormatTaskListWithFormat` | display.go:24 | `cfg, _ := GetConfig()` — **ignored** |
| `formatMinimalLine` | display.go:119 | `cfg, _ := GetConfig()` — **ignored** |
| `NextReport.SortFunc` | report.go:168 | `cfg, _ := GetConfig()` — **ignored** |
| `NextReport.LimitFunc` | report.go:187 | `cfg, _ := GetConfig()` — **ignored** |
| `sortByUrgency` | report.go:426 | `cfg, _ := GetConfig()` — **ignored** |
| `PopulateUrgency` | task.go:769 | `cfg, _ := GetConfig()` — **ignored** |
| `getWeekStart` | dateparse.go:484 | Checked — falls back to Monday |
| All cmd/ callers | root.go, sync.go | Checked — exits on error |

Every engine-internal caller except `dateparse.go` discards the error.

---

## 3. Additional Issues Found

### 3a. Config file clobbering

`LoadConfig()` (lines 296-306) catches **any** `ReadInConfig` error (not just
"file not found") and overwrites the config file with defaults. A YAML syntax
error in the user's `opal.yml` silently destroys their customizations.

### 3b. `SaveConfig()` manual field sync

Adding a new config field requires updating three places:

1. The `Config` struct definition
2. `LoadConfig()` — a `v.SetDefault(...)` call
3. `SaveConfig()` — a `v.Set(...)` call

Missing any one of these causes silent data loss on save or missing defaults.

### 3c. `auth.LoadConfig()` silent parse failures

```go
jwtExpiry, _ := strconv.Atoi(getEnv("JWT_EXPIRY", "3600"))
```

If `JWT_EXPIRY=abc`, `Atoi` returns `(0, error)`, the error is discarded, and
`JWTExpiry` silently becomes 0 (tokens expire immediately).

### 3d. Insecure JWT default

`JWT_SECRET` defaults to `"change-me-in-production"`. While `validateServerConfig()`
checks for empty values, it doesn't check for the insecure default.
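
The missing check is small. A sketch of what `validateServerConfig()` could
call (function and constant names are illustrative, not the real code):

```go
package main

import (
	"errors"
	"fmt"
)

const insecureJWTDefault = "change-me-in-production"

// validateJWTSecret rejects both an empty secret and the shipped placeholder.
func validateJWTSecret(secret string) error {
	switch secret {
	case "":
		return errors.New("JWT_SECRET must be set")
	case insecureJWTDefault:
		return errors.New("JWT_SECRET is still the insecure default")
	}
	return nil
}

func main() {
	fmt.Println(validateJWTSecret(insecureJWTDefault))
}
```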

### 3e. Sync API key in plaintext

`sync_api_key` is stored in `opal.yml` with no special file permissions.

### 3f. No env-var override of YAML config

Viper's `AutomaticEnv()` is never called, so there's no way to override YAML
config values via environment variables. This is an obstacle for 12-factor
server deployments.

---

## 4. Design Options

### Option A: Minimal fix — graceful fallback to defaults

**Change**: Make `LoadConfig()` return a `DefaultConfig()` when it can't read
or write the config file, instead of returning nil.

```go
func DefaultConfig() *Config {
	return &Config{
		DefaultFilter: "status:pending",
		DefaultSort:   "due,priority",
		// ... all defaults ...
		UrgencyDue: 12.0,
		// ...
	}
}

func LoadConfig() (*Config, error) {
	if globalConfig != nil {
		return globalConfig, nil
	}

	cfg, err := loadFromFile()
	if err != nil {
		// File not available — use defaults (common in server mode)
		cfg = DefaultConfig()
	}

	globalConfig = cfg
	return cfg, nil
}
```

**Pros**: Smallest change, fixes the panic, no behavior change for CLI users.
**Cons**: Doesn't address the architectural confusion. Error-swallowing callers
remain. The write-on-read side effect remains for the CLI. The three-source
config stays.

---

### Option B: Separate load paths for CLI and server

**Change**: Split `LoadConfig()` into two functions:

- `LoadConfigFromFile()` — current behavior for the CLI (read/write YAML)
- `LoadConfigFromDefaults()` — returns `DefaultConfig()` for the server

The server startup code calls `LoadConfigFromDefaults()` eagerly during init,
populating the singleton before any lazy `GetConfig()` call.

```go
// cmd/server.go — in serverStartCmd.Run, before InitDB()
engine.LoadConfigFromDefaults()
```

**Pros**: Fixes the panic. The server path never touches the filesystem for
config. Explicit about which mode is running. CLI behavior unchanged.
**Cons**: Two code paths to maintain. The server can't be customized without
code changes (urgency tuning, etc.).

---

### Option C: Read-only load with env-var overrides (recommended)

**Change**: Restructure `LoadConfig()` to:

1. Start with hardcoded defaults (always succeeds)
2. Layer the YAML file on top **if it exists** (read-only, never create)
3. Layer environment variables on top (via Viper's `AutomaticEnv`)
4. Never write to the filesystem as a side effect of loading

```go
func LoadConfig() (*Config, error) {
	if globalConfig != nil {
		return globalConfig, nil
	}

	v := viper.New()
	v.SetConfigType("yaml")

	// 1. Hardcoded defaults (always present)
	setDefaults(v)

	// 2. YAML file overlay (optional, read-only)
	if configPath, err := GetConfigPath(); err == nil {
		v.SetConfigFile(configPath)
		if err := v.ReadInConfig(); err != nil {
			if _, ok := err.(viper.ConfigFileNotFoundError); !ok {
				// File exists but is malformed — report, don't clobber
				if _, statErr := os.Stat(configPath); statErr == nil {
					return nil, fmt.Errorf("config file %s is invalid: %w", configPath, err)
				}
				// File doesn't exist — that's fine, use defaults
			}
		}
	}

	// 3. Environment variable overlay
	v.SetEnvPrefix("OPAL")
	v.AutomaticEnv()

	cfg := &Config{}
	if err := v.Unmarshal(cfg); err != nil {
		return nil, fmt.Errorf("failed to unmarshal config: %w", err)
	}

	globalConfig = cfg
	return cfg, nil
}
```

File creation moves to an explicit `InitConfig()` function, called only during
`opal setup` and CLI first-run:

```go
func InitConfig() error {
	configDir, err := GetConfigDir()
	if err != nil {
		return err
	}
	if err := os.MkdirAll(configDir, 0755); err != nil {
		return fmt.Errorf("failed to create config directory: %w", err)
	}
	return SaveConfig(DefaultConfig())
}
```

Environment variable mapping:

| YAML key | Env var |
|----------|---------|
| `urgency_due_coefficient` | `OPAL_URGENCY_DUE_COEFFICIENT` |
| `default_filter` | `OPAL_DEFAULT_FILTER` |
| `week_start_day` | `OPAL_WEEK_START_DAY` |
| ... | `OPAL_<UPPER_SNAKE>` |

**Pros**:

- Fixes the panic — loading never fails fatally (missing file = defaults)
- Malformed YAML is reported, not silently clobbered
- The server can tune urgency via `opal.env` without deploying `opal.yml`
- 12-factor compatible (env vars override everything)
- CLI behavior unchanged (reads existing `opal.yml` + env overrides)
- Single code path for both CLI and server
- No filesystem write side effects during load

**Cons**:

- `SaveConfig()` still needs the manual field sync (a separate improvement)
- Slightly more Viper configuration

---

### Option D: Full restructure with typed config sections

**Change**: Split `Config` into domain-specific sub-configs and unify the
loading mechanism for all config (merge `auth.Config` into the same system).

```go
type Config struct {
	Display DisplayConfig `mapstructure:"display"`
	Urgency UrgencyConfig `mapstructure:"urgency"`
	Sync    SyncConfig    `mapstructure:"sync"`
	Server  ServerConfig  `mapstructure:"server"` // absorbs auth.Config
}
```

**Pros**: Clean separation. Single config system. Extensible.
**Cons**: Large refactor. Breaks the existing `opal.yml` format. `auth.Config`
works fine as-is for its purpose. Over-engineered for the current problem.

---

## 5. Recommendation

**Option C** — read-only load with env-var overrides.

It fixes the immediate panic, eliminates the write-on-read footgun, enables
server customization via environment variables, and does it with a focused
change to one function. It doesn't over-engineer or restructure things that
work.

### Implementation plan

#### Step 1: Extract a `DefaultConfig()` constructor

Create a `DefaultConfig()` function that returns a `*Config` with all defaults
populated. This is the single source of truth for default values — used by
`LoadConfig()`, `InitConfig()`, and `SaveConfig()`.

**Files**: `internal/engine/config.go`

#### Step 2: Rewrite `LoadConfig()` to be read-only

Remove directory creation and file writing. Layer: defaults → YAML (if it
exists) → env vars. Return `DefaultConfig()` on a missing file instead of nil.
Return an error only on malformed YAML (file exists but can't be parsed).

**Files**: `internal/engine/config.go`

#### Step 3: Extract `InitConfig()` for file creation

Move the "create config directory + write defaults" logic into `InitConfig()`.
Call it from `initializeApp()` (the CLI first-run path) and `opal setup`.

**Files**: `internal/engine/config.go`, `cmd/root.go`, `cmd/setup.go`

#### Step 4: Fix error-swallowing callers

Two options, from least to most invasive:

**4a (recommended)**: Make `GetConfig()` never return nil. Since `LoadConfig()`
now always returns a valid `*Config` (defaults on failure), `GetConfig()` also
always returns non-nil, and the `cfg, _ := GetConfig()` pattern becomes safe.
The error return is kept for callers that want to distinguish "loaded from
file" from "using defaults", but nil is never returned.

**4b (defensive)**: Also make `BuildUrgencyCoefficients()` accept nil and
return default coefficients. Belt-and-suspenders.

**Files**: `internal/engine/config.go`, `internal/engine/urgency.go`
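
The 4b change could look roughly like this (a sketch with trimmed stand-in
structs — the real types carry 13 coefficients, not one):

```go
package main

import "fmt"

// Trimmed stand-ins for engine.Config and UrgencyCoefficients.
type Config struct {
	UrgencyDue float64
}

type UrgencyCoefficients struct {
	Due float64
}

func DefaultConfig() *Config {
	return &Config{UrgencyDue: 12.0}
}

// BuildUrgencyCoefficients made nil-safe: a nil config falls back to the
// defaults instead of dereferencing nil and panicking.
func BuildUrgencyCoefficients(cfg *Config) *UrgencyCoefficients {
	if cfg == nil {
		cfg = DefaultConfig()
	}
	return &UrgencyCoefficients{Due: cfg.UrgencyDue}
}

func main() {
	fmt.Println(BuildUrgencyCoefficients(nil).Due) // prints 12, no panic
}
```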

#### Step 5: Enable env-var overrides via Viper

Add `v.SetEnvPrefix("OPAL")` and `v.AutomaticEnv()` so that any config key can
be overridden by `OPAL_<KEY>`. Update `opal.env` in the deployment docs to show
urgency tuning examples.

**Files**: `internal/engine/config.go`, `docs/deployment.md`

#### Step 6: Eliminate `SaveConfig()` field duplication

Replace the manual `v.Set()` calls with a `mapstructure` round-trip or a
reflect-based helper so new fields are automatically included.

```go
func SaveConfig(cfg *Config) error {
	v := viper.New()
	v.SetConfigFile(configPath)
	v.SetConfigType("yaml")

	// Decode the struct into a map via its mapstructure tags,
	// then hand every key to viper.
	var data map[string]interface{}
	if err := mapstructure.Decode(cfg, &data); err != nil {
		return err
	}
	for k, val := range data {
		v.Set(k, val)
	}
	return v.WriteConfig()
}
```

Alternatively, marshal to YAML directly with `yaml.Marshal` and write the
bytes, bypassing Viper for the write path entirely.

**Files**: `internal/engine/config.go`

### Deployment changes

After this redesign, the Ansible role needs **no changes** — `opal.env` is
sufficient. To customize urgency coefficients on the server, add lines to
`opal.env`:

```bash
# Server-side urgency tuning (optional)
OPAL_URGENCY_DUE_COEFFICIENT=12.0
OPAL_URGENCY_AGE_MAX=180
```

No `opal.yml` deployment needed. The server runs on defaults + env overrides.

---

## 6. Technical Decisions

### ADR-1: Config loading must not write to the filesystem

- **Context**: `LoadConfig()` creates directories and writes `opal.yml` as a
  side effect. This fails on read-only filesystems (the server) and is
  surprising behavior for a "load" function.
- **Decision**: `LoadConfig()` becomes a pure read. File creation moves to
  `InitConfig()`, called explicitly during setup/first-run.
- **Alternatives**: Deploy `opal.yml` via Ansible; make `/etc/opal` writable.
- **Consequences**: CLI first-run code must call `InitConfig()` explicitly.
  The `IsFirstRun()` check remains in `initializeApp()`.

### ADR-2: Missing config file returns defaults, not an error

- **Context**: On the server, `opal.yml` doesn't exist and doesn't need to.
  The current code treats this as a fatal error.
- **Decision**: Missing file = use defaults silently. Malformed file = return
  an error (don't clobber). `GetConfig()` never returns nil.
- **Alternatives**: Require `opal.yml` everywhere; use separate load functions
  per mode.
- **Consequences**: Callers that discard errors (`cfg, _ := GetConfig()`) are
  now safe. The error return still exists for callers that need to distinguish
  "file loaded" from "using defaults".

### ADR-3: Environment variables can override any config key

- **Context**: The server is configured entirely via env vars (`opal.env` +
  systemd). Currently urgency coefficients can only be set via `opal.yml`.
- **Decision**: Enable Viper's `AutomaticEnv()` with the prefix `OPAL_`. Any
  YAML key `foo_bar` can be overridden by `OPAL_FOO_BAR`.
- **Alternatives**: Add a server-specific config file; keep config values
  hardcoded for the server.
- **Consequences**: Layered config: defaults < YAML file < env vars. Aligns
  with 12-factor app principles. Deployment can tune behavior without deploying
  additional files.

### ADR-4: Do not merge auth.Config into engine.Config

- **Context**: `auth.Config` and `engine.Config` are completely independent
  systems. Merging them would create a single unified config.
- **Decision**: Keep them separate. `auth.Config` loads from env vars, works
  correctly, and is server-only. There is no reason to change it.
- **Alternatives**: Unified config struct with sections.
- **Consequences**: Two config types remain. This is acceptable because they
  serve different purposes, have different lifecycles, and the current
  `auth.Config` has no bugs.

# Web CLI Parity — Requirements Spec

**Status:** Draft
**Last updated:** 2026-02-19
**Related:** [`cli-ux-improvements.md`](cli-ux-improvements.md) — CLI UX
improvements being developed in parallel. Features marked with **(CLI dep)**
depend on or benefit from CLI-side work landing first.

This document covers the features the CLI exposes that the web frontend does
not. Each section maps a CLI capability to a proposed web feature, with
acceptance criteria and priority.

---

## Problem Statement

The opal CLI is the primary interface and offers rich task management: in-place
editing, start/stop timers, detailed task info, project/tag browsing, bulk
operations, and full sync controls. The web frontend currently only supports
create, complete, delete, and report-based listing. Users who switch between
CLI and web hit a wall — most workflows beyond "add" and "done" require
falling back to the terminal.

---

## Tier 1 — Core Gaps (the web feels broken without these)

### 1.1 Task Detail View (`info` equivalent)

**CLI:** `opal info 2` shows every field on a task — UUID, status, description,
urgency, priority, project, all timestamps (created, modified, started, ended,
due, scheduled, wait, until), recurrence pattern, parent UUID, and tags.

**Web gap:** Tapping a task does nothing. There is no way to see a task's full
state.

**User story:** As a user, I want to tap a task to see all of its fields so
that I can understand its full context without switching to the CLI.

**Acceptance criteria:**

- Given a task in the list, when I tap it, then a detail view opens showing
  every non-null field on the task
- Given a task with a recurrence pattern, when I view its detail, then the
  recurrence interval and parent template link are visible
- Given a task with scheduled/wait/until dates, when I view its detail, then
  those dates are displayed with labels
- Given a task detail view, when I tap outside or press a close/back control,
  then the detail view closes and the list is restored

**Interaction:** Bottom sheet — slides up from the bottom of the screen.
Natural on mobile, avoids a route change, and leaves the task list partially
visible behind it. Light-dismiss (tap scrim to close).

**Priority:** MUST

---

### 1.2 Task Editing (`modify` / `edit` equivalent)

**CLI:** `opal 2 modify priority:H due:friday` changes attributes inline.
`opal 2 edit` opens `$EDITOR` with all fields in a structured format.

**Web gap:** There is no way to edit a task after creation. Users must delete
and recreate to fix a typo or change a due date.

**User story:** As a user, I want to edit any field on an existing task so that
I can adjust it as my plans change.

**Acceptance criteria:**

- Given a task detail view, when I tap an editable field, then I can modify its
  value
- Given I change a field and confirm, when the update succeeds, then the task
  list reflects the change
- Given I change a field and confirm, when the update fails, then the error is
  shown and the original value is preserved
- Editable fields: description, project, priority, due, scheduled, wait, until,
  tags, recurrence (on templates)
- Read-only fields (displayed but not editable): UUID, created, modified, end,
  parent UUID, status

**Implementation notes:**

- Uses `PUT /tasks/:uuid` — already exists in the API
- For tags, uses `POST /tasks/:uuid/tags` and
  `DELETE /tasks/:uuid/tags/:tag` — already exist
- Field editing could be inline (tap a field to edit) or form-based (an edit
  mode that makes all fields editable). Inline feels lighter for single-field
  tweaks.

**Priority:** MUST

---

### 1.3 Start / Stop Timer (`start` / `stop` equivalent)

**CLI:** `opal 1 start` marks a task as actively being worked on (sets the
`start` timestamp). `opal 1 stop` clears it. The `active` report lists started
tasks.

**Web gap:** The API endpoints (`POST /tasks/:uuid/start`,
`POST /tasks/:uuid/stop`) are wired in `endpoints.js` but there are no UI
controls. The `active` report works in the report picker, but users can't
actually start tasks.

**User story:** As a user, I want to mark a task as "in progress" so that I can
track what I'm actively working on and see it in the Active report.

**Acceptance criteria:**

- Given a pending task, when I tap a start control, then the task's start time
  is set and it appears in the Active report
- Given a started task, when I tap a stop control, then the start time is
  cleared
- Given a started task, when I view it in the list, then there is a visual
  indicator that it is active (distinguishable from non-started tasks)
- The start/stop action should be accessible from both the task row and the
  detail view

**Interaction:** Swipe left to toggle start/stop — mirrors swipe right for
complete. Also accessible from the detail view (1.1). No visible button on the
task row.

**Priority:** MUST

---

### 1.4 Uncomplete / Revert (`modify status:pending` equivalent)

**(CLI dep)** — The CLI is getting a dedicated `uncomplete` command and a
generic `undo` (see [IMP-1](cli-ux-improvements.md#imp-1-undo--uncomplete)).
The web feature should align with however the CLI exposes this so the mental
model is consistent.

**CLI:** `opal <id> modify status:pending` reverts a completed task. IMP-1
proposes `opal <id> uncomplete` and `opal undo` as dedicated commands.

**Web gap:** Once a task is completed it disappears from the pending list. The
only recovery is the `completed` report plus the CLI. This is also called out
in `BUGS.md` as a missing feature.

**User story:** As a user, I want to undo an accidental completion so that the
task reappears in my pending list.

**Acceptance criteria:**

- Given a completed task (visible in the Completed report), when I tap an
  uncomplete action, then the task's status reverts to pending and it
  reappears in the pending list
- Given I just completed a task, when I tap undo within a brief window (e.g. a
  toast with an undo button), then the task is reverted without navigating to
  the Completed report

**Implementation notes:**

- Uses `PUT /tasks/:uuid` with `{"status": "pending"}` — works today
- The undo toast after completion is a UX nicety but not strictly required for
  parity
- If CLI IMP-1 lands an undo log table, the web could use the same mechanism
  for a more robust undo (revert any action, not just completion)

**Priority:** MUST

---

### 1.5 Delete Confirmation

**(CLI dep)** — The CLI is improving confirmation prompts to show matched tasks
before confirming (see [IMP-3](cli-ux-improvements.md#imp-3-show-matched-tasks-in-confirmations)).
The web confirmation should follow the same pattern.

**CLI:** `opal delete` always prompts `Proceed? (y/N)` before deleting. IMP-3
proposes showing the affected task(s) in the confirmation.

**Web gap:** Delete is instant with no confirmation. There is no undo.

**User story:** As a user, I want a confirmation before deleting a task so that
I don't lose work by accident.

**Acceptance criteria:**

- Given I trigger a delete action, when the confirmation appears, then I must
  explicitly confirm before the delete proceeds
- Given I dismiss the confirmation, then the task is not deleted
- The confirmation should show the task description (matching CLI IMP-3's
  approach of showing what will be affected)

**Priority:** MUST

---

## Tier 2 — CLI Parity (brings the web up to feature-complete)

### ~~2.1 Projects View~~ — DEFERRED

### ~~2.2 Tags View~~ — DEFERRED

Projects and tags browsing is deferred. The existing filter syntax
(`project:foo`, `+tag`) covers this adequately for now. When filter
autocomplete is added later, discoverability will improve without needing
dedicated views.

---
### 2.3 Display All Date Fields on Task Items
|
||||||
|
|
||||||
|
**CLI:** `opal info` and the table display show scheduled, wait, until, and
|
||||||
|
start dates when present. The `waiting` report makes sense because you can see
|
||||||
|
*when* the wait expires.
|
||||||
|
|
||||||
|
**Web gap:** Only the due date is shown on task rows. Scheduled, wait, until,
|
||||||
|
and start dates are invisible. Users can set them via CLI syntax in the input
|
||||||
|
bar but can't see them afterward.
|
||||||
|
|
||||||
|
**User story:** As a user, I want to see all relevant dates on a task so that I
|
||||||
|
understand when it's scheduled, when it becomes visible, and when it expires.
|
||||||
|
|
||||||
|
**Acceptance criteria:**
|
||||||
|
- Given a task with a `scheduled` date, then a "Scheduled: <date>" indicator
|
||||||
|
appears on the task row
|
||||||
|
- Given a task with a `wait` date, then a "Wait: <date>" indicator appears
|
||||||
|
- Given a task with an `until` date, then an "Until: <date>" indicator appears
|
||||||
|
- Given a task with a `start` time set, then an "Active since <time>" or
|
||||||
|
similar indicator appears
|
||||||
|
- Date fields that are null are not shown (no empty labels)
|
||||||
|
- The detail view (1.1) shows all dates; the task row shows them in a
|
||||||
|
compact/abbreviated form
|
||||||
|
|
||||||
|
**Priority:** SHOULD
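The null-skipping rule above can be sketched as a small label formatter. The
field names (`scheduled`, `wait`, `until`, `start`) are assumed from the CLI
attribute names; the actual API response shape may differ:

```typescript
interface TaskDates {
  scheduled?: string | null;
  wait?: string | null;
  until?: string | null;
  start?: string | null;
}

// Build the compact labels for a task row, skipping null/absent
// fields so no empty labels are rendered.
function dateLabels(t: TaskDates): string[] {
  const labels: string[] = [];
  if (t.scheduled) labels.push(`Scheduled: ${t.scheduled}`);
  if (t.wait) labels.push(`Wait: ${t.wait}`);
  if (t.until) labels.push(`Until: ${t.until}`);
  if (t.start) labels.push(`Active since ${t.start}`);
  return labels;
}
```

The detail view (1.1) can reuse the same data with full labels; only the row
rendering needs the compact form.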
---
### 2.4 Recurrence Display and Management

**(CLI dep)** — CLI is improving recurring task feedback on completion (see
[IMP-7](cli-ux-improvements.md#imp-7-recurring-task-feedback)). The web should
show equivalent feedback when completing a recurring task.

**CLI:** `opal add "standup" due:mon recur:1w` creates a recurring template +
first instance. `opal template` and `opal recurring` list them. Completing an
instance spawns the next one. `opal edit <id>` on an instance can update the
template's recurrence pattern.

**Web gap:** Recurring tasks can be created via CLI syntax in the input bar, and
the `recurring`/`template` reports work, but:

- There is no visual indicator that a task is a recurring instance
- There is no way to see or navigate to the parent template
- There is no way to edit the recurrence pattern
- Completing a recurring task gives no feedback about the next instance

**User story:** As a user, I want to see which tasks are recurring, view their
schedule, and manage the recurrence pattern so that I can adjust repeating
commitments.

**Acceptance criteria:**

- Given a recurring instance, when I view it in the list, then a recurrence
  icon or badge is visible
- Given a recurring instance, when I open its detail view, then the recurrence
  interval and parent template are shown
- Given a recurring template, when I open its detail view, then I can edit the
  recurrence interval
- Given I complete a recurring instance, then the next instance is
  automatically created (this already works server-side; just verify the list
  refreshes to show it)
- Given I complete a recurring instance, then a toast or inline message shows
  the next instance's due date (mirrors CLI IMP-7)

**Priority:** SHOULD

---
### ~~2.5 Sync Controls~~ — DEFERRED

Sync is deprioritized. The web has no client-side task parsing, so it cannot
function offline. It fetches directly from the server on every action — sync
is only relevant if we move to an offline-first model, which is not planned.

---
### 2.6 Bulk Operations (`opal 1 2 3 modify ...` / `opal +tag done`)

**CLI:** Filters and numeric IDs can target multiple tasks.
`opal +urgent done` completes all tasks tagged `urgent`.
`opal 1 2 3 modify project:sprint-2` moves three tasks at once.
The CLI prompts for confirmation when multiple tasks are affected.

**Web gap:** All actions are single-task only. No multi-select, no batch
complete, no batch modify.

**User story:** As a user, I want to select multiple tasks and perform an
action on all of them so that I can manage tasks efficiently.

**Acceptance criteria:**

- Given I enter selection mode (e.g., long-press a task), then I can tap
  additional tasks to add them to the selection
- Given I have tasks selected, then I can batch-complete, batch-delete, or
  batch-modify (at minimum: change project, add/remove tag, change priority)
- Given a batch action targets 2+ tasks, then a confirmation prompt appears
  before executing
- Given I tap outside the selection or press a cancel control, then selection
  mode is exited

**Priority:** SHOULD

---
## Tier 3 — Power User & Polish

### 3.1 Keyboard Shortcuts

**CLI:** The CLI is entirely keyboard-driven by nature.

**Web gap:** No keyboard shortcuts exist. Desktop users must use the mouse for
everything.

**User story:** As a desktop user, I want keyboard shortcuts so that I can
manage tasks without reaching for the mouse.

**Suggested bindings:**

| Key | Action |
|-----|--------|
| `n` | Focus the input bar (new task) |
| `j` / `k` | Move selection down / up in task list |
| `x` | Complete selected task |
| `e` | Open detail/edit for selected task |
| `d` | Delete selected task (with confirmation) |
| `s` | Start/stop selected task |
| `Escape` | Close detail view / deselect / blur input |
| `/` | Open filter modal |

**Acceptance criteria:**

- Given I press `n` when not focused on an input, then the input bar is focused
- Given I press `j`/`k`, then the visual selection moves through the task list
- Given I press `x` with a task selected, then that task is completed
- Shortcuts are disabled when an input or textarea is focused (except Escape)

**Priority:** COULD
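The binding table and the focus rule can be sketched as a pure dispatch
function; the action names and map are illustrative, not an agreed API:

```typescript
type Action =
  | "focus-input" | "move-down" | "move-up" | "complete" | "edit"
  | "delete" | "start-stop" | "escape" | "open-filter" | null;

// Mirrors the suggested-bindings table above (names are hypothetical).
const KEY_MAP: Record<string, Action> = {
  n: "focus-input",
  j: "move-down",
  k: "move-up",
  x: "complete",
  e: "edit",
  d: "delete",
  s: "start-stop",
  Escape: "escape",
  "/": "open-filter",
};

// Resolve a keypress to an action. Returns null for unbound keys, and
// for every key except Escape while an input/textarea is focused.
function resolveShortcut(key: string, inputFocused: boolean): Action {
  if (inputFocused && key !== "Escape") return null;
  return KEY_MAP[key] ?? null;
}
```

A `keydown` listener would call `resolveShortcut(event.key, …)` and dispatch
on the result, which keeps the focus rule testable outside the DOM.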
---

### 3.2 Description Search

**CLI:** Filtering by tags, project, priority, and status covers structured
attributes, but there's no full-text description search in the CLI either.

**Web opportunity:** The web could add a search capability that the CLI lacks.

**User story:** As a user, I want to search tasks by description text so that I
can find a specific task without remembering its tags or project.

**Acceptance criteria:**

- Given I type in a search field, then the task list filters to tasks whose
  description contains the search text (case-insensitive)
- Search can be client-side (filter the loaded report) or server-side (new
  query param)

**Interaction:** Extends the existing filter syntax. Bare text in the filter
input is interpreted as a description search. The filter modal becomes
filter/search — no separate search bar.

**Priority:** COULD
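The client-side variant is a one-line predicate over the loaded report; a
server-side query param would apply the same predicate in the report query.
A minimal sketch (the `Task` shape is assumed):

```typescript
interface Task {
  description: string;
}

// Case-insensitive substring match over the loaded report.
// An empty or whitespace-only query leaves the list unfiltered.
function searchTasks<T extends Task>(tasks: T[], query: string): T[] {
  const needle = query.trim().toLowerCase();
  if (needle === "") return tasks;
  return tasks.filter((t) => t.description.toLowerCase().includes(needle));
}
```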
---

### ~~3.3 Display IDs~~ — DROPPED

Not needed. The web uses touch/tap interaction, not keyboard-driven ID
selection. Display IDs are a CLI affordance that doesn't translate to the web
interaction model.

---

### ~~3.4 API Key Management~~ — ON HOLD

Blocked on user management investigation. Open questions: Are API keys
user-scoped? Should we move to per-user databases? This feature should wait
until the auth/user model is settled.

---
### 3.5 Task Annotations (`annotate` equivalent)

**(CLI dep)** — The CLI is adding `opal <id> annotate "<text>"` (see
[IMP-11](cli-ux-improvements.md#imp-11-task-annotations)). This requires a
backend schema change (annotations storage). Once that lands, the web should
surface annotations.

**CLI (proposed):** `opal 3 annotate "Traced to token expiry"` adds a
timestamped note. `opal 3 info` shows annotations. IMP-11 notes potential
integration with jade-depo (the gems note management system).

**Web gap:** No concept of annotations exists. Tasks have only a description
field for text.

**User story:** As a user, I want to add and view notes on a task so that I can
record progress and context over time.

**Acceptance criteria:**

- Given a task detail view (1.1), then any existing annotations are shown as a
  timestamped list below the task fields
- Given a task detail view, when I tap an "Add note" control, then I can enter
  annotation text and it is saved with a timestamp
- Annotations are ordered newest-first or oldest-first (decide which)
- Annotations are read-only after creation (no inline editing — `denotate`
  removes the latest)

**Decisions:**

- Annotations are visible only in the detail view (1.1), not on the task row.
- jade-depo integration is not relevant to the web UI at this time.

**Priority:** COULD (blocked on CLI IMP-11 landing the backend schema)

---
### 3.6 Task History (`log` equivalent)

**(CLI dep)** — The CLI is adding `opal <id> log` (see
[IMP-12](cli-ux-improvements.md#imp-12-task-history)). This reads the
`change_log` table, which already exists for sync.

**CLI (proposed):** `opal 3 log` shows timestamped change history (created,
modified, completed, etc.).

**Web gap:** No way to see what happened to a task over time.

**User story:** As a user, I want to see a task's change history so that I can
understand when and how it was modified.

**Acceptance criteria:**

- Given a task detail view (1.1), then a "History" section or tab shows the
  change log for that task
- Each entry shows: timestamp, change type, and what changed (e.g.,
  "priority: default -> high")
- History is read-only

**Implementation notes:**

- The `change_log` table already exists for sync. This likely needs a new API
  endpoint (`GET /tasks/:uuid/log` or similar) to expose it.
- Alternatively, the change log could be included in the task detail response.

**Priority:** COULD (blocked on CLI IMP-12 / API endpoint)

---
### 3.7 Configurable Default Report

**CLI:** `default_report` in `opal.yml` controls what `opal` with no arguments
shows (default: `list`). `default_filter` controls the base filter.

**Web gap:** The web always starts on the `list` report. There is no user
preference for the landing view.

**User story:** As a user, I want to choose which report I see when I open the
app so that my most-used view loads first.

**Acceptance criteria:**

- Given I choose a default report in settings, then the app opens to that
  report on next launch
- The preference is stored in localStorage (no backend change needed)

**Priority:** COULD
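The localStorage preference is a few lines. A sketch with an injected storage
interface so the logic also runs outside the browser (tests, SSR); the storage
key name is hypothetical:

```typescript
const DEFAULT_REPORT = "list";
const STORAGE_KEY = "opal.defaultReport"; // hypothetical key name

// Minimal subset of the Web Storage interface, injected for testability.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Fall back to "list" when nothing is saved or the saved report
// name no longer exists (e.g., a report was renamed).
function loadDefaultReport(store: KVStore, knownReports: string[]): string {
  const saved = store.getItem(STORAGE_KEY);
  return saved !== null && knownReports.includes(saved) ? saved : DEFAULT_REPORT;
}

function saveDefaultReport(store: KVStore, report: string): void {
  store.setItem(STORAGE_KEY, report);
}
```

In the app, `window.localStorage` would be passed as the store; validating the
saved value against the known report list keeps a stale preference from
breaking the landing view.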
---

## Non-Functional Requirements

| # | Requirement | Priority |
|---|-------------|----------|
| NFR-1 | All new features must work on Android Chrome and desktop Chrome/Firefox (per architecture doc) | MUST |
| NFR-2 | Task detail and edit interactions must be touch-friendly (44px minimum tap targets) | MUST |
| NFR-3 | Editing a task must be optimistic — UI updates immediately, rolls back on failure | SHOULD |
| NFR-4 | Keyboard shortcuts must not conflict with browser defaults (Ctrl+T, etc.) | MUST |
| NFR-5 | New features must work with all three themes (Obsidian, Paper, Midnight) | MUST |
| NFR-6 | No new API endpoints are required for Tier 1 — all endpoints already exist | N/A |

---
## Constraints & Assumptions

**Constraints:**

- Single-screen architecture per the existing design doc — no new routes for
  projects/tags (use sheets/modals/filters instead)
- Server-side parsing and sorting — the frontend stays a thin shell
- SvelteKit + Vite stack, Svelte 5 runes

**Assumptions:**

- The `PUT /tasks/:uuid` endpoint accepts partial updates (only fields present
  in the request body are changed)
- The working set display IDs are not currently exposed via the API; Tier 3.3
  would require an API change
- Sync transport (pull/push) works correctly; the gap is only in applying
  pulled changes to the store

---
## Cross-Reference: CLI UX Improvements

The following items from [`cli-ux-improvements.md`](cli-ux-improvements.md)
directly affect web features in this spec:

| CLI IMP | CLI Feature | Web Impact |
|---------|-------------|------------|
| IMP-1 | Undo / uncomplete | Enables 1.4 (uncomplete). If the undo log is stored in the DB, the web can use the same mechanism. |
| IMP-2 | Better `add` feedback | The web already shows the created task in-list. No direct web change, but if the API response for `POST /tasks/parse` is enriched (display ID, parsed modifiers), the web could show a richer confirmation toast. |
| IMP-3 | Show matched tasks in confirmations | Pattern for 1.5 (delete confirmation) and 2.6 (bulk ops). |
| IMP-5 | Handle colons in descriptions | Affects the web input bar — the same parsing runs server-side via `/tasks/parse`. No web change needed; the web benefits automatically. |
| IMP-7 | Recurring task feedback | Directly feeds 2.4 (recurrence display). |
| IMP-9 | Relative dates in CLI | The web already does this. No change needed. |
| IMP-11 | Task annotations | Enables 3.5 (annotations on web). Blocked on backend schema. |
| IMP-12 | Task history | Enables 3.6 (history on web). Blocked on API endpoint. |

Items from the CLI spec with **no web impact**: IMP-4 (delete ID resolution
bug), IMP-6 (consistent error codes), IMP-8 (shell completions), IMP-10
(dry-run flag), IMP-13 (version command).

---
## Out of Scope

- Collaboration / multi-user sharing
- Notifications, reminders, or push alerts
- Custom fields or metadata
- Drag-to-reorder (ordering is report/urgency-driven)
- Offline-first / IndexedDB task storage (tasks are server-side only)
- iOS Safari support

---
## Open Questions Summary

| # | Question | Blocks | Status |
|---|----------|--------|--------|
| ~~Q1~~ | ~~Detail view format~~ | ~~1.1~~ | **Resolved** — bottom sheet |
| ~~Q2~~ | ~~Start/stop control placement~~ | ~~1.3~~ | **Resolved** — swipe left |
| ~~Q3~~ | ~~Projects/tags view format~~ | ~~2.1, 2.2~~ | **Resolved** — deferred, use filters |
| ~~Q4~~ | ~~Description search format~~ | ~~3.2~~ | **Resolved** — extend filter syntax |
| ~~Q5~~ | ~~Display IDs~~ | ~~3.3~~ | **Resolved** — dropped |
| ~~Q6~~ | ~~Annotation visibility~~ | ~~3.5~~ | **Resolved** — detail view only |
| ~~Q7~~ | ~~jade-depo integration~~ | ~~3.5~~ | **Resolved** — not for web |
@@ -1 +1 @@
-0.1.0
+0.2.0
@@ -0,0 +1,77 @@
package cmd

import (
	"fmt"
	"os"
	"time"

	"git.jnss.me/joakim/opal/internal/engine"
	"github.com/spf13/cobra"
)

var olderFlag string

var cleanCmd = &cobra.Command{
	Use:   "clean",
	Short: "Purge soft-deleted tasks from the database",
	Run: func(cmd *cobra.Command, args []string) {
		if err := cleanTasks(); err != nil {
			fmt.Fprintf(os.Stderr, "Error: %v\n", err)
			os.Exit(1)
		}
	},
}

func init() {
	cleanCmd.Flags().StringVar(&olderFlag, "older", "", "Only purge tasks deleted longer than this duration ago (e.g. 30d, 1w)")
}

func cleanTasks() error {
	var olderThan *time.Duration
	if olderFlag != "" {
		d, err := engine.ParseRecurrencePattern(olderFlag)
		if err != nil {
			return fmt.Errorf("invalid duration %q: %w", olderFlag, err)
		}
		olderThan = &d
	}

	tasks, err := engine.GetDeletedTasks(olderThan)
	if err != nil {
		return err
	}

	if len(tasks) == 0 {
		fmt.Println("No deleted tasks to purge.")
		return nil
	}

	if dryRunFlag {
		fmt.Printf("Would permanently remove %d deleted task(s).\n", len(tasks))
		return nil
	}

	if len(tasks) > 1 {
		fmt.Printf("Permanently remove %d deleted task(s)? This cannot be undone. (y/N): ", len(tasks))
		var confirm string
		fmt.Scanln(&confirm)
		if confirm != "y" && confirm != "Y" {
			fmt.Println("Cancelled.")
			return nil
		}
	}

	for _, task := range tasks {
		if err := task.Delete(true); err != nil {
			return fmt.Errorf("failed to purge task %s: %w", task.UUID, err)
		}
	}

	fmt.Printf("Purged %d deleted task(s).\n", len(tasks))

	if err := engine.CleanupChangeLog(); err != nil {
		fmt.Fprintf(os.Stderr, "Warning: failed to clean up change log: %v\n", err)
	}

	return nil
}
@@ -2,14 +2,37 @@ package cmd

 import (
 	"fmt"
+	"strings"
+
 	"git.jnss.me/joakim/opal/internal/engine"
 	"github.com/spf13/cobra"
 )

 // taskFilterCompletion provides dynamic completions for task filter arguments.
-// Suggests +tag and project:name completions from the database.
+// Suggests +tag, project:name, and attribute value completions from the database.
 func taskFilterCompletion(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
+	// If typing a key:value, complete the value part
+	if idx := strings.IndexByte(toComplete, ':'); idx >= 0 {
+		key := toComplete[:idx]
+		if engine.ValidAttributeKeys[key] {
+			return attributeValueCompletions(key, toComplete), cobra.ShellCompDirectiveNoFileComp
+		}
+	}
+
+	// If toComplete is a prefix of an attribute key, return only key:
+	// completions with NoSpace so the cursor stays after the colon.
+	if toComplete != "" && !strings.HasPrefix(toComplete, "+") && !strings.HasPrefix(toComplete, "-") {
+		var keyCompletions []string
+		for key := range engine.ValidAttributeKeys {
+			if strings.HasPrefix(key, toComplete) {
+				keyCompletions = append(keyCompletions, fmt.Sprintf("%s:", key))
+			}
+		}
+		if len(keyCompletions) > 0 {
+			return keyCompletions, cobra.ShellCompDirectiveNoFileComp | cobra.ShellCompDirectiveNoSpace
+		}
+	}
+
 	var completions []string

 	tags, err := engine.GetAllTags()
@@ -31,10 +54,69 @@ func taskFilterCompletion(cmd *cobra.Command, args []string, toComplete string)
 		completions = append(completions, fmt.Sprintf("%s:", key))
 	}

-	return completions, cobra.ShellCompDirectiveNoFileComp
+	return completions, cobra.ShellCompDirectiveNoFileComp | cobra.ShellCompDirectiveNoSpace
+}
+
+// attributeValueCompletions returns key:value completions for a known attribute key.
+// Cobra filters by prefix automatically, so we return all values prefixed with "key:".
+func attributeValueCompletions(key, toComplete string) []string {
+	var values []string
+
+	switch key {
+	case "status":
+		values = []string{"pending", "completed", "deleted", "recurring"}
+	case "priority":
+		values = []string{"H", "M", "L"}
+	case "project":
+		projects, err := engine.GetAllProjects()
+		if err == nil {
+			values = projects
+		}
+	case "due", "wait", "scheduled", "until":
+		values = []string{
+			"today", "tomorrow", "yesterday", "now",
+			"eod", "sow", "eow", "som", "eom",
+			"mon", "tue", "wed", "thu", "fri", "sat", "sun",
+		}
+	case "recur":
+		values = []string{"daily", "weekly", "monthly", "yearly", "1d", "1w", "2w", "1m", "1y"}
+	}

+	completions := make([]string, 0, len(values))
+	for _, v := range values {
+		completions = append(completions, fmt.Sprintf("%s:%s", key, v))
+	}
+	return completions
+}
+
+// rootValidArgsFunction provides completions for root-level arguments,
+// enabling flexible syntax like "opal 1 de<TAB>" to complete to "delete".
+func rootValidArgsFunction(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
+	// Delegate to taskFilterCompletion first — if toComplete is a partial
+	// attribute key, it returns early with NoSpace and we should honour that.
+	filterCompletions, directive := taskFilterCompletion(cmd, args, toComplete)
+
+	var completions []string
+
+	// Suggest command names
+	for _, name := range commandNames {
+		completions = append(completions, name)
+	}
+
+	// Suggest report names
+	for _, name := range reportNames {
+		completions = append(completions, name)
+	}
+
+	completions = append(completions, filterCompletions...)
+
+	return completions, directive
 }

 func init() {
+	// Root command completions for flexible syntax (e.g., "opal 1 de<TAB>")
+	rootCmd.ValidArgsFunction = rootValidArgsFunction
+
 	// Register dynamic completions for commands that accept filters
 	addCmd.ValidArgsFunction = taskFilterCompletion
 	doneCmd.ValidArgsFunction = taskFilterCompletion
@@ -8,19 +8,25 @@ import (
 	"github.com/spf13/cobra"
 )

+var hardDeleteFlag bool
+
 var deleteCmd = &cobra.Command{
 	Use:   "delete [filter...]",
 	Short: "Delete tasks",
 	Run: func(cmd *cobra.Command, args []string) {
 		parsed := getParsedArgs(cmd)
-		if err := deleteTasks(parsed.Filters); err != nil {
+		if err := deleteTasks(parsed.Filters, hardDeleteFlag); err != nil {
 			fmt.Fprintf(os.Stderr, "Error: %v\n", err)
 			os.Exit(1)
 		}
 	},
 }

-func deleteTasks(args []string) error {
+func init() {
+	deleteCmd.Flags().BoolVar(&hardDeleteFlag, "hard", false, "Permanently remove task from database")
+}
+
+func deleteTasks(args []string, hard bool) error {
 	filter, err := engine.ParseFilter(args)
 	if err != nil {
 		return err
@@ -53,15 +59,24 @@ func deleteTasks(args []string) error {
 		return fmt.Errorf("no tasks matched filter")
 	}

+	action := "delete"
+	if hard {
+		action = "permanently delete"
+	}
+
 	if dryRunFlag {
-		fmt.Print(engine.FormatTaskConfirmList("delete", tasks, ws))
+		fmt.Print(engine.FormatTaskConfirmList(action, tasks, ws))
 		fmt.Println("Dry run — no changes made.")
 		return nil
 	}

-	if len(tasks) > 1 {
-		fmt.Print(engine.FormatTaskConfirmList("delete", tasks, ws))
-		fmt.Printf("Proceed? (y/N): ")
+	if len(tasks) > 1 || hard {
+		fmt.Print(engine.FormatTaskConfirmList(action, tasks, ws))
+		if hard {
+			fmt.Printf("This cannot be undone. Proceed? (y/N): ")
+		} else {
+			fmt.Printf("Proceed? (y/N): ")
+		}
 		var confirm string
 		fmt.Scanln(&confirm)
 		if confirm != "y" && confirm != "Y" {
@@ -71,14 +86,20 @@ func deleteTasks(args []string) error {
 	}

 	for _, task := range tasks {
-		task.Delete(false) // Soft delete
-		engine.RecordUndo("delete", task.UUID)
+		task.Delete(hard)
+		if !hard {
+			engine.RecordUndo("delete", task.UUID)
+		}
 	}

+	verb := "Deleted"
+	if hard {
+		verb = "Permanently deleted"
+	}
 	if len(tasks) == 1 {
-		fmt.Printf("Deleted task %s\n", engine.FormatTaskSummary(tasks[0], ws))
+		fmt.Printf("%s task %s\n", verb, engine.FormatTaskSummary(tasks[0], ws))
 	} else {
-		fmt.Printf("Deleted %d task(s).\n", len(tasks))
+		fmt.Printf("%s %d task(s).\n", verb, len(tasks))
 	}
 	return nil
 }
@@ -32,7 +32,7 @@ var (

 // Command classification
 var commandNames = []string{
-	"add", "done", "modify", "delete",
+	"add", "done", "modify", "delete", "clean",
 	"start", "stop", "count", "projects", "tags",
 	"info", "edit", "server", "sync", "reports", "setup",
 	"version", "annotate", "denotate", "undo", "uncomplete", "log", "completion",
@@ -56,6 +56,7 @@ var rootCmd = &cobra.Command{
 	Short: "Opal task manager - taskwarrior-inspired CLI task management",
 	Long: `Opal is a powerful command-line task manager.
 It supports filtering, tags, priorities, projects, and recurring tasks.`,
+	Args: cobra.ArbitraryArgs,
 	Run: func(cmd *cobra.Command, args []string) {
 		// Default behavior: run configured default report (defaults to "list")
 		parsed := getParsedArgs(cmd)
@@ -87,6 +88,11 @@ func Execute() error {
 	if firstArg == "-h" || firstArg == "--help" || firstArg == "help" {
 		return rootCmd.Execute()
 	}
+	// Let Cobra's built-in completion machinery handle shell completions
+	// directly, bypassing preprocessing that would create tasks.
+	if firstArg == "__complete" || firstArg == "__completeNoDesc" {
+		return rootCmd.Execute()
+	}
 }

 // Preprocess arguments (read-only scan — os.Args is never mutated)
@@ -278,6 +284,7 @@ func init() {
 	undoCmd.GroupID = "task"
 	uncompleteCmd.GroupID = "task"
 	logCmd.GroupID = "task"
+	cleanCmd.GroupID = "task"

 	rootCmd.AddCommand(addCmd)
 	rootCmd.AddCommand(doneCmd)
@@ -292,6 +299,7 @@ func init() {
 	rootCmd.AddCommand(undoCmd)
 	rootCmd.AddCommand(uncompleteCmd)
 	rootCmd.AddCommand(logCmd)
+	rootCmd.AddCommand(cleanCmd)

 	// Other commands
 	countCmd.GroupID = "other"
+52 −61

```diff
@@ -224,8 +224,8 @@ var syncUpCmd = &cobra.Command{
 		client := sync.NewClient(cfg.SyncURL, cfg.SyncAPIKey, cfg.SyncClientID)
 
 		// Get local changes
-		lastSync := getLastSyncTime(cfg.SyncClientID)
-		localChanges, err := getLocalChanges(lastSync)
+		lastSync := sync.GetLastSyncTime(cfg.SyncClientID)
+		localChanges, err := sync.GetLocalChanges(lastSync)
 		if err != nil {
 			fmt.Fprintf(os.Stderr, "Error getting local changes: %v\n", err)
 			os.Exit(1)
@@ -264,7 +264,7 @@ var syncDownCmd = &cobra.Command{
 		}
 
 		client := sync.NewClient(cfg.SyncURL, cfg.SyncAPIKey, cfg.SyncClientID)
-		lastSync := getLastSyncTime(cfg.SyncClientID)
+		lastSync := sync.GetLastSyncTime(cfg.SyncClientID)
 
 		changes, err := client.PullChanges(lastSync)
 		if err != nil {
@@ -277,7 +277,55 @@ var syncDownCmd = &cobra.Command{
 			return
 		}
 
-		fmt.Printf("✓ Pulled %d changes from server\n", len(changes))
+		// Parse changes into tasks
+		tasks, err := client.ParseChanges(changes)
+		if err != nil {
+			fmt.Fprintf(os.Stderr, "Error parsing changes: %v\n", err)
+			os.Exit(1)
+		}
+
+		// Apply each task locally
+		var applied int
+		for _, task := range tasks {
+			if err := task.Save(); err != nil {
+				fmt.Fprintf(os.Stderr, "Warning: failed to save task %s: %v\n", task.UUID, err)
+				continue
+			}
+
+			// Mark as sync-originated to prevent feedback loop
+			_ = engine.MarkChangeLogAsSync(task.UUID.String())
+
+			// Sync tags
+			savedTask, err := engine.GetTask(task.UUID)
+			if err != nil {
+				fmt.Fprintf(os.Stderr, "Warning: failed to reload task %s: %v\n", task.UUID, err)
+				continue
+			}
+
+			currentTags, _ := savedTask.GetTags()
+			currentSet := make(map[string]bool)
+			for _, tag := range currentTags {
+				currentSet[tag] = true
+			}
+			desiredSet := make(map[string]bool)
+			for _, tag := range task.Tags {
+				desiredSet[tag] = true
+			}
+			for tag := range currentSet {
+				if !desiredSet[tag] {
+					savedTask.RemoveTag(tag)
+				}
+			}
+			for tag := range desiredSet {
+				if !currentSet[tag] {
+					savedTask.AddTag(tag)
+				}
+			}
+
+			applied++
+		}
+
+		fmt.Printf("✓ Pulled %d changes, applied %d tasks from server\n", len(changes), applied)
 	},
 }
 
```
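The tag-reconciliation loop added to the sync-down apply path is plain set difference: build a set from the current tags and one from the desired tags, remove what is only in the first, add what is only in the second. A minimal standalone sketch of the same idea (the `diffTags` helper here is illustrative, not part of the codebase):

```go
package main

import (
	"fmt"
	"sort"
)

// diffTags computes which tags must be added and removed to turn
// current into desired, mirroring the sync-down reconciliation loop.
func diffTags(current, desired []string) (toAdd, toRemove []string) {
	currentSet := make(map[string]bool)
	for _, t := range current {
		currentSet[t] = true
	}
	desiredSet := make(map[string]bool)
	for _, t := range desired {
		desiredSet[t] = true
	}
	for t := range currentSet {
		if !desiredSet[t] {
			toRemove = append(toRemove, t)
		}
	}
	for t := range desiredSet {
		if !currentSet[t] {
			toAdd = append(toAdd, t)
		}
	}
	// Sort for deterministic output; map iteration order is random.
	sort.Strings(toAdd)
	sort.Strings(toRemove)
	return toAdd, toRemove
}

func main() {
	add, remove := diffTags([]string{"home", "urgent"}, []string{"home", "work"})
	fmt.Println(add, remove) // [work] [urgent]
}
```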
```diff
@@ -414,63 +462,6 @@ func init() {
 	syncCmd.PersistentFlags().BoolVarP(&quietFlag, "quiet", "q", false, "Suppress progress output")
 }
 
-// Helper functions
-
-func getLastSyncTime(clientID string) int64 {
-	db := engine.GetDB()
-	if db == nil {
-		return 0
-	}
-
-	var lastSync int64
-	err := db.QueryRow("SELECT last_sync FROM sync_state WHERE client_id = ?", clientID).Scan(&lastSync)
-	if err != nil {
-		return 0
-	}
-
-	return lastSync
-}
-
-func getLocalChanges(since int64) ([]*engine.Task, error) {
-	db := engine.GetDB()
-	if db == nil {
-		return nil, fmt.Errorf("database not initialized")
-	}
-
-	rows, err := db.Query(`
-		SELECT DISTINCT task_uuid
-		FROM change_log
-		WHERE changed_at > ?
-		ORDER BY changed_at ASC
-	`, since)
-	if err != nil {
-		return nil, err
-	}
-	defer rows.Close()
-
-	var tasks []*engine.Task
-	for rows.Next() {
-		var uuidStr string
-		if err := rows.Scan(&uuidStr); err != nil {
-			continue
-		}
-
-		taskUUID, err := uuid.Parse(uuidStr)
-		if err != nil {
-			continue
-		}
-
-		task, err := engine.GetTask(taskUUID)
-		if err != nil {
-			continue
-		}
-
-		tasks = append(tasks, task)
-	}
-
-	return tasks, nil
-}
-
 func formatTimestamp(ts int64) string {
 	t := time.Unix(ts, 0)
 	now := time.Now()
```
```diff
@@ -104,6 +104,8 @@ func PushChanges(w http.ResponseWriter, r *http.Request) {
 			if err := task.Save(); err != nil {
 				continue
 			}
+			// Mark as sync-originated to prevent feedback loop
+			_ = engine.MarkChangeLogAsSync(task.UUID.String())
 			// Add tags
 			for _, tag := range task.Tags {
 				_ = task.AddTag(tag)
@@ -114,15 +116,18 @@ func PushChanges(w http.ResponseWriter, r *http.Request) {
 
 		// Task exists - check timestamps for conflicts
 		if existing.Modified.Unix() > task.Modified.Unix() {
-			// Server version is newer - conflict (but we'll apply last-write-wins)
+			// Server version is newer - skip this push
 			conflicts++
+			continue
 		}
 
-		// Apply changes (last-write-wins)
+		// Apply changes (client is newer or equal)
 		task.ID = existing.ID // Preserve database ID
 		if err := task.Save(); err != nil {
 			continue
 		}
+		// Mark as sync-originated to prevent feedback loop
+		_ = engine.MarkChangeLogAsSync(task.UUID.String())
 
 		// Sync tags
 		existingTags := make(map[string]bool)
```
```diff
@@ -130,35 +130,35 @@ func CreateTask(w http.ResponseWriter, r *http.Request) {
 	mod.AddTags = req.Tags
 
 	if req.Project != nil {
-		mod.SetAttributes["project"] = req.Project
+		mod.Set("project", req.Project)
 	}
 
 	if req.Priority != nil {
-		mod.SetAttributes["priority"] = req.Priority
+		mod.Set("priority", req.Priority)
 	}
 
 	if req.Due != nil {
 		dueStr := fmt.Sprintf("%d", *req.Due)
-		mod.SetAttributes["due"] = &dueStr
+		mod.Set("due", &dueStr)
 	}
 
 	if req.Scheduled != nil {
 		scheduledStr := fmt.Sprintf("%d", *req.Scheduled)
-		mod.SetAttributes["scheduled"] = &scheduledStr
+		mod.Set("scheduled", &scheduledStr)
 	}
 
 	if req.Wait != nil {
 		waitStr := fmt.Sprintf("%d", *req.Wait)
-		mod.SetAttributes["wait"] = &waitStr
+		mod.Set("wait", &waitStr)
 	}
 
 	if req.Until != nil {
 		untilStr := fmt.Sprintf("%d", *req.Until)
-		mod.SetAttributes["until"] = &untilStr
+		mod.Set("until", &untilStr)
 	}
 
 	if req.Recurrence != nil {
-		mod.SetAttributes["recurrence"] = req.Recurrence
+		mod.Set("recur", req.Recurrence)
 	}
 
 	// Create task
@@ -232,39 +232,39 @@ func UpdateTask(w http.ResponseWriter, r *http.Request) {
 	mod := engine.NewModifier()
 
 	if req.Description != nil {
-		mod.SetAttributes["description"] = req.Description
+		mod.Set("description", req.Description)
 	}
 
 	if req.Status != nil {
-		mod.SetAttributes["status"] = req.Status
+		mod.Set("status", req.Status)
 	}
 
 	if req.Priority != nil {
-		mod.SetAttributes["priority"] = req.Priority
+		mod.Set("priority", req.Priority)
 	}
 
 	if req.Project != nil {
-		mod.SetAttributes["project"] = req.Project
+		mod.Set("project", req.Project)
 	}
 
 	if req.Due != nil {
 		dueStr := fmt.Sprintf("%d", *req.Due)
-		mod.SetAttributes["due"] = &dueStr
+		mod.Set("due", &dueStr)
 	}
 
 	if req.Scheduled != nil {
 		scheduledStr := fmt.Sprintf("%d", *req.Scheduled)
-		mod.SetAttributes["scheduled"] = &scheduledStr
+		mod.Set("scheduled", &scheduledStr)
 	}
 
 	if req.Wait != nil {
 		waitStr := fmt.Sprintf("%d", *req.Wait)
-		mod.SetAttributes["wait"] = &waitStr
+		mod.Set("wait", &waitStr)
 	}
 
 	if req.Until != nil {
 		untilStr := fmt.Sprintf("%d", *req.Until)
-		mod.SetAttributes["until"] = &untilStr
+		mod.Set("until", &untilStr)
 	}
 
 	if req.Start != nil {
@@ -273,7 +273,7 @@ func UpdateTask(w http.ResponseWriter, r *http.Request) {
 	}
 
 	if req.Recurrence != nil {
-		mod.SetAttributes["recurrence"] = req.Recurrence
+		mod.Set("recur", req.Recurrence)
 	}
 
 	// Apply modifier
```
```diff
@@ -307,6 +307,13 @@ func runMigrations() error {
 			END;
 		`,
 	},
+	{
+		version: 2,
+		sql: `
+			ALTER TABLE change_log ADD COLUMN source TEXT NOT NULL DEFAULT 'local';
+			CREATE INDEX idx_change_log_source ON change_log(source);
+		`,
+	},
 }
 
 // Apply pending migrations
@@ -327,7 +334,7 @@ func runMigrations() error {
 	if _, err := tx.Exec(
 		"INSERT INTO schema_version (version, applied_at) VALUES (?, ?)",
 		migration.version,
-		getCurrentTimestamp(),
+		GetCurrentTimestamp(),
 	); err != nil {
 		tx.Rollback()
 		return fmt.Errorf("failed to record migration %d: %w", migration.version, err)
@@ -342,14 +349,9 @@ func runMigrations() error {
 	return nil
 }
 
-// getCurrentTimestamp returns the current Unix timestamp
-func getCurrentTimestamp() int64 {
-	return timeNow().Unix()
-}
-
 // GetCurrentTimestamp returns the current Unix timestamp (exported for API use)
 func GetCurrentTimestamp() int64 {
-	return getCurrentTimestamp()
+	return timeNow().Unix()
 }
 
 // CleanupChangeLog removes old change log entries based on retention policy
@@ -409,3 +411,24 @@ func SetChangeLogRetentionDays(days int) error {
 	_, err := db.Exec("INSERT OR REPLACE INTO sync_config (key, value) VALUES ('change_log_retention_days', ?)", days)
 	return err
 }
+
+// MarkChangeLogAsSync marks the most recent change_log entry for a task UUID
+// as originating from sync (not local), preventing the feedback loop where
+// synced changes get re-pushed as local changes.
+func MarkChangeLogAsSync(taskUUID string) error {
+	db := GetDB()
+	if db == nil {
+		return fmt.Errorf("database not initialized")
+	}
+
+	_, err := db.Exec(`
+		UPDATE change_log SET source = 'sync'
+		WHERE id = (
+			SELECT id FROM change_log
+			WHERE task_uuid = ?
+			ORDER BY id DESC
+			LIMIT 1
+		)
+	`, taskUUID)
+	return err
+}
```
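The new `source` column and `MarkChangeLogAsSync` exist to break a sync feedback loop: a change applied *from* the server lands in `change_log` like any other write, and without the marker it would be pushed straight back on the next sync-up. The filtering side of that contract, sketched with an in-memory log (the `change` type and `pendingPush` helper are illustrative, not the real schema access code):

```go
package main

import "fmt"

// change is a toy stand-in for a change_log row.
type change struct {
	TaskUUID string
	Source   string // "local" or "sync"
}

// pendingPush returns only locally originated changes, mirroring the
// intent of the source column: rows marked 'sync' were applied from the
// server and must not be echoed back on the next sync-up.
func pendingPush(log []change) []string {
	var out []string
	for _, c := range log {
		if c.Source == "local" {
			out = append(out, c.TaskUUID)
		}
	}
	return out
}

func main() {
	log := []change{
		{TaskUUID: "a", Source: "local"},
		{TaskUUID: "b", Source: "sync"}, // applied by sync-down; skipped here
	}
	fmt.Println(pendingPush(log)) // [a]
}
```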
```diff
@@ -7,6 +7,21 @@ import (
 	"time"
 )
 
+var monthNames = map[string]time.Month{
+	"jan": time.January, "january": time.January,
+	"feb": time.February, "february": time.February,
+	"mar": time.March, "march": time.March,
+	"apr": time.April, "april": time.April,
+	"may": time.May,
+	"jun": time.June, "june": time.June,
+	"jul": time.July, "july": time.July,
+	"aug": time.August, "august": time.August,
+	"sep": time.September, "september": time.September,
+	"oct": time.October, "october": time.October,
+	"nov": time.November, "november": time.November,
+	"dec": time.December, "december": time.December,
+}
+
 // DateParser handles all date/time/duration parsing with configurable options
 type DateParser struct {
 	base time.Time
@@ -238,22 +253,7 @@ func (p *DateParser) parseWeekday(s string) (time.Time, bool) {
 
 // parseMonthName handles month names (jan, january, feb, february, etc.)
 func (p *DateParser) parseMonthName(s string) (time.Time, bool) {
-	months := map[string]time.Month{
-		"jan": time.January, "january": time.January,
-		"feb": time.February, "february": time.February,
-		"mar": time.March, "march": time.March,
-		"apr": time.April, "april": time.April,
-		"may": time.May,
-		"jun": time.June, "june": time.June,
-		"jul": time.July, "july": time.July,
-		"aug": time.August, "august": time.August,
-		"sep": time.September, "september": time.September,
-		"oct": time.October, "october": time.October,
-		"nov": time.November, "november": time.November,
-		"dec": time.December, "december": time.December,
-	}
-
-	month, ok := months[s]
+	month, ok := monthNames[s]
 	if !ok {
 		return time.Time{}, false
 	}
@@ -316,22 +316,7 @@ func (p *DateParser) parseDayAndMonth(dayStr, monthStr string) (int, time.Month,
 		return 0, 0, false
 	}
 
-	months := map[string]time.Month{
-		"jan": time.January, "january": time.January,
-		"feb": time.February, "february": time.February,
-		"mar": time.March, "march": time.March,
-		"apr": time.April, "april": time.April,
-		"may": time.May,
-		"jun": time.June, "june": time.June,
-		"jul": time.July, "july": time.July,
-		"aug": time.August, "august": time.August,
-		"sep": time.September, "september": time.September,
-		"oct": time.October, "october": time.October,
-		"nov": time.November, "november": time.November,
-		"dec": time.December, "december": time.December,
-	}
-
-	month, ok := months[monthStr]
+	month, ok := monthNames[monthStr]
 	if !ok {
 		return 0, 0, false
 	}
```
```diff
@@ -29,15 +29,9 @@ func FormatTaskListWithFormat(tasks []*Task, ws *WorkingSet, format string) stri
 	if format == "minimal" {
 		result := ""
 		for i, task := range tasks {
-			displayID := i + 1
-			if ws != nil {
-				// Use working set display ID if available
-				for id, uuid := range ws.byID {
-					if uuid == task.UUID {
-						displayID = id
-						break
-					}
-				}
+			displayID := resolveDisplayID(task, ws)
+			if displayID == 0 {
+				displayID = i + 1
 			}
 			urgency := task.CalculateUrgency(coeffs)
 			urgencyColor := getUrgencyColor(urgency)
@@ -71,15 +65,9 @@ func FormatTaskListWithFormat(tasks []*Task, ws *WorkingSet, format string) stri
 
 	// Add rows
 	for i, task := range tasks {
-		displayID := i + 1
-		if ws != nil {
-			// Use working set display ID if available
-			for id, uuid := range ws.byID {
-				if uuid == task.UUID {
-					displayID = id
-					break
-				}
-			}
+		displayID := resolveDisplayID(task, ws)
+		if displayID == 0 {
+			displayID = i + 1
 		}
 
 		urgency := task.CalculateUrgency(coeffs)
@@ -270,8 +258,6 @@ func FormatTagCounts(tagCounts map[string]int) string {
 	return t.Render()
 }
 
-// Helper functions
-
 func formatStatus(status Status) string {
 	switch status {
 	case StatusPending:
@@ -322,7 +308,6 @@ func formatUrgency(urgency float64) string {
 }
 
 func getUrgencyColor(urgency float64) *color.Color {
-	// Returns color for minimal format
 	if urgency >= 10.0 {
 		return color.New(color.FgHiRed, color.Bold)
 	} else if urgency >= 5.0 {
@@ -362,14 +347,6 @@ func formatDue(due *time.Time) string {
 	return rel
 }
 
-func formatTimeWithColor(t time.Time) string {
-	now := time.Now()
-	if t.Before(now) {
-		return color.RedString(t.Format("2006-01-02 15:04"))
-	}
-	return t.Format("2006-01-02 15:04")
-}
-
 func formatTime(t time.Time) string {
 	return t.Format("2006-01-02 15:04")
 }
```
```diff
@@ -71,10 +71,8 @@ func (f *Filter) ToSQL() (string, []interface{}) {
 	conditions := []string{}
 	args := []interface{}{}
 
-	// Track if we have an explicit status filter
 	hasStatusFilter := false
 
-	// Status filter
 	if status, ok := f.Attributes["status"]; ok {
 		hasStatusFilter = true
 
@@ -104,13 +102,11 @@ func (f *Filter) ToSQL() (string, []interface{}) {
 		}
 	}
 
-	// Project filter
 	if project, ok := f.Attributes["project"]; ok {
 		conditions = append(conditions, "project = ?")
 		args = append(args, project)
 	}
 
-	// Priority filter
 	if priority, ok := f.Attributes["priority"]; ok {
 		priorityInt := priorityStringToInt(priority)
 		conditions = append(conditions, "priority = ?")
@@ -138,7 +134,6 @@ func (f *Filter) ToSQL() (string, []interface{}) {
 		args = append(args, tag)
 	}
 
-	// UUID filter
 	if len(f.UUIDs) > 0 {
 		placeholders := strings.Repeat("?,", len(f.UUIDs))
 		placeholders = placeholders[:len(placeholders)-1]
```
|||||||
@@ -4,14 +4,15 @@ package engine
|
|||||||
// Used by parseAddArgs (cmd/add.go), ParseFilter, and ParseModifier
|
// Used by parseAddArgs (cmd/add.go), ParseFilter, and ParseModifier
|
||||||
// to distinguish modifiers from description text.
|
// to distinguish modifiers from description text.
|
||||||
var ValidAttributeKeys = map[string]bool{
|
var ValidAttributeKeys = map[string]bool{
|
||||||
"due": true,
|
"description": true,
|
||||||
"priority": true,
|
"due": true,
|
||||||
"project": true,
|
"priority": true,
|
||||||
"recur": true,
|
"project": true,
|
||||||
"status": true,
|
"recur": true,
|
||||||
"wait": true,
|
"status": true,
|
||||||
"scheduled": true,
|
"wait": true,
|
||||||
"until": true,
|
"scheduled": true,
|
||||||
|
"until": true,
|
||||||
}
|
}
|
||||||
|
|
||||||
// DateKeys is the subset of ValidAttributeKeys that hold date values.
|
// DateKeys is the subset of ValidAttributeKeys that hold date values.
|
||||||
|
|||||||
```diff
@@ -23,6 +23,13 @@ func NewModifier() *Modifier {
 	}
 }
 
+// Set adds an attribute to the modifier, maintaining the SetAttributes and
+// AttributeOrder invariant. Pass nil to clear the attribute.
+func (m *Modifier) Set(key string, value *string) {
+	m.SetAttributes[key] = value
+	m.AttributeOrder = append(m.AttributeOrder, key)
+}
+
 // ParseModifier parses command-line args into Modifier.
 // Only recognized attribute keys (ValidAttributeKeys) are accepted;
 // unrecognized key:value tokens produce an error.
```
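The new `Set` method exists because `Apply` iterates `AttributeOrder`, not the map: a caller who writes `SetAttributes[key]` directly without also appending to `AttributeOrder` gets the update silently dropped. A minimal sketch of the invariant (the struct here is a cut-down stand-in for the real `Modifier`):

```go
package main

import "fmt"

// Modifier (abridged): SetAttributes holds the values, AttributeOrder
// records which keys Apply will visit and in what order.
type Modifier struct {
	SetAttributes  map[string]*string
	AttributeOrder []string
}

// Set keeps both fields in step; a nil value means "clear the attribute".
func (m *Modifier) Set(key string, value *string) {
	m.SetAttributes[key] = value
	m.AttributeOrder = append(m.AttributeOrder, key)
}

func main() {
	m := &Modifier{SetAttributes: map[string]*string{}}
	v := "work"
	m.Set("project", &v)
	m.Set("priority", nil) // stored as nil: Apply will reset priority
	fmt.Println(m.AttributeOrder) // [project priority]
}
```

This is also why the `Apply`/`ApplyToNew` hunks further down add a safety net that reconstructs `AttributeOrder` from the map keys when it is empty.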
```diff
@@ -86,13 +93,20 @@ func (m *Modifier) Apply(task *Task) error {
 	resolvedDates["created"] = task.Created
 	resolvedDates["modified"] = task.Modified
 
+	// Safety net: if SetAttributes were populated without AttributeOrder,
+	// reconstruct order from map keys so updates aren't silently dropped.
+	if len(m.AttributeOrder) == 0 && len(m.SetAttributes) > 0 {
+		for key := range m.SetAttributes {
+			m.AttributeOrder = append(m.AttributeOrder, key)
+		}
+	}
+
 	// Apply attributes in the order they were specified (important for relative references)
 	dateKeys := DateKeys
 
 	for _, key := range m.AttributeOrder {
 		valuePtr := m.SetAttributes[key]
 
-		// Handle date attributes with relative expression support
 		if dateKeys[key] {
 			if err := applyDateAttribute(key, valuePtr, task, resolvedDates); err != nil {
 				return err
@@ -100,30 +114,11 @@ func (m *Modifier) Apply(task *Task) error {
 			continue
 		}
 
-		// Handle non-date attributes
-		switch key {
-		case "priority":
-			if valuePtr == nil {
-				task.Priority = PriorityDefault
-			} else {
-				task.Priority = Priority(priorityStringToInt(*valuePtr))
-			}
-		case "project":
-			task.Project = valuePtr
-		case "recur":
-			if valuePtr == nil {
-				task.RecurrenceDuration = nil
-			} else {
-				duration, err := ParseRecurrencePattern(*valuePtr)
-				if err != nil {
-					return fmt.Errorf("invalid recurrence: %w", err)
-				}
-				task.RecurrenceDuration = &duration
-			}
+		if err := applyNonDateAttribute(key, valuePtr, task); err != nil {
+			return err
 		}
 	}
 
-	// Apply tag changes
 	for _, tag := range m.AddTags {
 		if err := task.AddTag(tag); err != nil {
 			return err
@@ -154,13 +149,20 @@ func (m *Modifier) ApplyToNew(task *Task) error {
 		resolvedDates["created"] = task.Created
 	}
 
+	// Safety net: if SetAttributes were populated without AttributeOrder,
+	// reconstruct order from map keys so updates aren't silently dropped.
+	if len(m.AttributeOrder) == 0 && len(m.SetAttributes) > 0 {
+		for key := range m.SetAttributes {
+			m.AttributeOrder = append(m.AttributeOrder, key)
+		}
+	}
+
 	// Apply attributes in the order they were specified (important for relative references)
 	dateKeys := DateKeys
 
 	for _, key := range m.AttributeOrder {
 		valuePtr := m.SetAttributes[key]
 
-		// Handle date attributes with relative expression support
 		if dateKeys[key] {
 			if err := applyDateAttribute(key, valuePtr, task, resolvedDates); err != nil {
 				return err
@@ -168,26 +170,8 @@ func (m *Modifier) ApplyToNew(task *Task) error {
 			continue
 		}
 
-		// Handle non-date attributes
-		switch key {
-		case "priority":
-			if valuePtr == nil {
-				task.Priority = PriorityDefault
-			} else {
-				task.Priority = Priority(priorityStringToInt(*valuePtr))
-			}
-		case "project":
-			task.Project = valuePtr
-		case "recur":
-			if valuePtr == nil {
-				task.RecurrenceDuration = nil
-			} else {
-				duration, err := ParseRecurrencePattern(*valuePtr)
-				if err != nil {
-					return fmt.Errorf("invalid recurrence: %w", err)
-				}
-				task.RecurrenceDuration = &duration
-			}
+		if err := applyNonDateAttribute(key, valuePtr, task); err != nil {
+			return err
 		}
 	}
 
@@ -195,6 +179,39 @@ func (m *Modifier) ApplyToNew(task *Task) error {
 	return nil
 }
 
+// applyNonDateAttribute applies a non-date attribute to a task.
+func applyNonDateAttribute(key string, valuePtr *string, task *Task) error {
+	switch key {
+	case "description":
+		if valuePtr != nil {
+			task.Description = *valuePtr
+		}
+	case "status":
+		if valuePtr != nil && len(*valuePtr) > 0 {
+			task.Status = Status((*valuePtr)[0])
+		}
+	case "priority":
+		if valuePtr == nil {
+			task.Priority = PriorityDefault
+		} else {
+			task.Priority = Priority(priorityStringToInt(*valuePtr))
+		}
+	case "project":
+		task.Project = valuePtr
+	case "recur":
+		if valuePtr == nil {
+			task.RecurrenceDuration = nil
+		} else {
+			duration, err := ParseRecurrencePattern(*valuePtr)
+			if err != nil {
+				return fmt.Errorf("invalid recurrence: %w", err)
+			}
+			task.RecurrenceDuration = &duration
+		}
+	}
+	return nil
+}
+
 // parseRelativeExpression checks if a string is a relative date expression
 // Returns: baseAttr, operator, offset, isRelative
 // Example: "due-1d" -> "due", "-", "1d", true
```
```diff
@@ -14,20 +14,16 @@ func ParseKeyValueFormat(data string, skipComments bool) (map[string]string, err
 	lines := strings.Split(data, "\n")
 
 	for i, line := range lines {
-		// Trim whitespace
 		line = strings.TrimSpace(line)
 
-		// Skip empty lines
 		if line == "" {
 			continue
 		}
 
-		// Skip comments if requested
 		if skipComments && strings.HasPrefix(line, "#") {
 			continue
 		}
 
-		// Split on first ':'
 		parts := strings.SplitN(line, ":", 2)
 		if len(parts) != 2 {
 			return nil, fmt.Errorf("line %d: invalid format (expected 'key:value')", i+1)
```
```diff
@@ -2,6 +2,7 @@ package engine
 
 import (
 	"fmt"
+	"sort"
 )
 
 // DisplayFormat defines how tasks should be displayed
@@ -20,6 +21,7 @@ type Report struct {
 	DisplayFormat DisplayFormat // How to display results
 	SortFunc func([]*Task) []*Task
 	LimitFunc func([]*Task) []*Task
+	ShowWaiting bool // If false (default), tasks with future wait dates are hidden
 }
 
 // AllReports returns all predefined reports
@@ -76,6 +78,7 @@ func AllReport() *Report {
 		Description: "All tasks",
 		BaseFilter: filter,
 		DisplayFormat: DisplayFormatTable,
+		ShowWaiting: true,
 	}
 }
 
```
```diff
@@ -131,16 +134,11 @@ func NewestReport() *Report {
 		BaseFilter: filter,
 		DisplayFormat: DisplayFormatTable,
 		SortFunc: func(tasks []*Task) []*Task {
-			// Sort by created descending
 			sorted := make([]*Task, len(tasks))
 			copy(sorted, tasks)
-			for i := 0; i < len(sorted)-1; i++ {
-				for j := i + 1; j < len(sorted); j++ {
-					if sorted[i].Created.Before(sorted[j].Created) {
-						sorted[i], sorted[j] = sorted[j], sorted[i]
-					}
-				}
-			}
+			sort.Slice(sorted, func(i, j int) bool {
+				return sorted[i].Created.After(sorted[j].Created)
+			})
 			return sorted
 		},
 		LimitFunc: func(tasks []*Task) []*Task {
```
@@ -164,23 +162,14 @@ func NextReport() *Report {
|
|||||||
BaseFilter: filter,
|
BaseFilter: filter,
|
||||||
DisplayFormat: DisplayFormatTable,
|
DisplayFormat: DisplayFormatTable,
|
||||||
SortFunc: func(tasks []*Task) []*Task {
|
SortFunc: func(tasks []*Task) []*Task {
|
||||||
// Sort by urgency descending
|
|
||||||
cfg, _ := GetConfig()
|
cfg, _ := GetConfig()
|
||||||
coeffs := BuildUrgencyCoefficients(cfg)
|
coeffs := BuildUrgencyCoefficients(cfg)
|
||||||
|
|
||||||
sorted := make([]*Task, len(tasks))
|
sorted := make([]*Task, len(tasks))
|
||||||
copy(sorted, tasks)
|
copy(sorted, tasks)
|
||||||
|
sort.Slice(sorted, func(i, j int) bool {
|
||||||
for i := 0; i < len(sorted)-1; i++ {
|
return sorted[i].CalculateUrgency(coeffs) > sorted[j].CalculateUrgency(coeffs)
|
||||||
for j := i + 1; j < len(sorted); j++ {
|
})
|
||||||
urgI := sorted[i].CalculateUrgency(coeffs)
|
|
||||||
urgJ := sorted[j].CalculateUrgency(coeffs)
|
|
||||||
if urgI < urgJ {
|
|
||||||
sorted[i], sorted[j] = sorted[j], sorted[i]
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
return sorted
|
return sorted
|
||||||
},
|
},
|
||||||
LimitFunc: func(tasks []*Task) []*Task {
|
LimitFunc: func(tasks []*Task) []*Task {
|
||||||
@@ -208,16 +197,11 @@ func OldestReport() *Report {
|
|||||||
BaseFilter: filter,
|
BaseFilter: filter,
|
||||||
DisplayFormat: DisplayFormatTable,
|
DisplayFormat: DisplayFormatTable,
|
||||||
SortFunc: func(tasks []*Task) []*Task {
|
SortFunc: func(tasks []*Task) []*Task {
|
||||||
// Sort by created ascending (already default, but explicit)
|
|
||||||
sorted := make([]*Task, len(tasks))
|
sorted := make([]*Task, len(tasks))
|
||||||
copy(sorted, tasks)
|
copy(sorted, tasks)
|
||||||
for i := 0; i < len(sorted)-1; i++ {
|
sort.Slice(sorted, func(i, j int) bool {
|
||||||
for j := i + 1; j < len(sorted); j++ {
|
return sorted[i].Created.Before(sorted[j].Created)
|
||||||
if sorted[i].Created.After(sorted[j].Created) {
|
})
|
||||||
sorted[i], sorted[j] = sorted[j], sorted[i]
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
return sorted
|
return sorted
|
||||||
},
|
},
|
||||||
}
|
}
|
||||||
@@ -291,6 +275,7 @@ func WaitingReport() *Report {
|
|||||||
Description: "Hidden/waiting tasks",
|
Description: "Hidden/waiting tasks",
|
||||||
BaseFilter: filter,
|
BaseFilter: filter,
|
||||||
DisplayFormat: DisplayFormatTable,
|
DisplayFormat: DisplayFormatTable,
|
||||||
|
ShowWaiting: true,
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -340,6 +325,12 @@ func (r *Report) applyPostFilters(tasks []*Task) []*Task {
|
|||||||
for _, task := range tasks {
|
for _, task := range tasks {
|
||||||
include := true
|
include := true
|
||||||
|
|
||||||
|
// By default, hide tasks with a future wait date (like taskwarrior).
|
||||||
|
// Reports that need to show waiting tasks set ShowWaiting = true.
|
||||||
|
if !r.ShowWaiting && task.Wait != nil && task.Wait.After(now) {
|
||||||
|
include = false
|
||||||
|
}
|
||||||
|
|
||||||
// Check for _started marker
|
// Check for _started marker
|
||||||
if r.BaseFilter.Attributes["_started"] == "true" {
|
if r.BaseFilter.Attributes["_started"] == "true" {
|
||||||
if task.Start == nil {
|
if task.Start == nil {
|
||||||
@@ -429,18 +420,13 @@ func sortByUrgency(tasks []*Task) []*Task {
|
|||||||
sorted := make([]*Task, len(tasks))
|
sorted := make([]*Task, len(tasks))
|
||||||
copy(sorted, tasks)
|
copy(sorted, tasks)
|
||||||
|
|
||||||
// Calculate and store urgency on each task
|
|
||||||
for _, t := range sorted {
|
for _, t := range sorted {
|
||||||
t.Urgency = t.CalculateUrgency(coeffs)
|
t.Urgency = t.CalculateUrgency(coeffs)
|
||||||
}
|
}
|
||||||
|
|
||||||
for i := 0; i < len(sorted)-1; i++ {
|
sort.Slice(sorted, func(i, j int) bool {
|
||||||
for j := i + 1; j < len(sorted); j++ {
|
return sorted[i].Urgency > sorted[j].Urgency
|
||||||
if sorted[i].Urgency < sorted[j].Urgency {
|
})
|
||||||
sorted[i], sorted[j] = sorted[j], sorted[i]
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
return sorted
|
return sorted
|
||||||
}
|
}
|
||||||
@@ -83,30 +83,31 @@ type Task struct {
 	Urgency float64 `json:"urgency"`
 }
 
+// taskJSON is the wire format for Task, using unix timestamps instead of time.Time.
+type taskJSON struct {
+	UUID uuid.UUID `json:"uuid"`
+	ID int `json:"id"`
+	Status Status `json:"status"`
+	Description string `json:"description"`
+	Project *string `json:"project"`
+	Priority Priority `json:"priority"`
+	Created int64 `json:"created"`
+	Modified int64 `json:"modified"`
+	Start *int64 `json:"start,omitempty"`
+	End *int64 `json:"end,omitempty"`
+	Due *int64 `json:"due,omitempty"`
+	Scheduled *int64 `json:"scheduled,omitempty"`
+	Wait *int64 `json:"wait,omitempty"`
+	Until *int64 `json:"until,omitempty"`
+	RecurrenceDuration *int64 `json:"recurrence_duration,omitempty"`
+	ParentUUID *uuid.UUID `json:"parent_uuid,omitempty"`
+	Annotations []Annotation `json:"annotations,omitempty"`
+	Tags []string `json:"tags"`
+	Urgency float64 `json:"urgency"`
+}
+
 // MarshalJSON emits Task with unix timestamps (int64) instead of RFC3339 strings.
 func (t Task) MarshalJSON() ([]byte, error) {
-	type taskJSON struct {
-		UUID uuid.UUID `json:"uuid"`
-		ID int `json:"id"`
-		Status Status `json:"status"`
-		Description string `json:"description"`
-		Project *string `json:"project"`
-		Priority Priority `json:"priority"`
-		Created int64 `json:"created"`
-		Modified int64 `json:"modified"`
-		Start *int64 `json:"start,omitempty"`
-		End *int64 `json:"end,omitempty"`
-		Due *int64 `json:"due,omitempty"`
-		Scheduled *int64 `json:"scheduled,omitempty"`
-		Wait *int64 `json:"wait,omitempty"`
-		Until *int64 `json:"until,omitempty"`
-		RecurrenceDuration *int64 `json:"recurrence_duration,omitempty"`
-		ParentUUID *uuid.UUID `json:"parent_uuid,omitempty"`
-		Annotations []Annotation `json:"annotations,omitempty"`
-		Tags []string `json:"tags"`
-		Urgency float64 `json:"urgency"`
-	}
-
 	toUnix := func(tp *time.Time) *int64 {
 		if tp == nil {
 			return nil
@@ -146,28 +147,6 @@ func (t Task) MarshalJSON() ([]byte, error) {
 
 // UnmarshalJSON parses Task from JSON with unix timestamps (int64) and duration in seconds.
 func (t *Task) UnmarshalJSON(data []byte) error {
-	type taskJSON struct {
-		UUID uuid.UUID `json:"uuid"`
-		ID int `json:"id"`
-		Status Status `json:"status"`
-		Description string `json:"description"`
-		Project *string `json:"project"`
-		Priority Priority `json:"priority"`
-		Created int64 `json:"created"`
-		Modified int64 `json:"modified"`
-		Start *int64 `json:"start,omitempty"`
-		End *int64 `json:"end,omitempty"`
-		Due *int64 `json:"due,omitempty"`
-		Scheduled *int64 `json:"scheduled,omitempty"`
-		Wait *int64 `json:"wait,omitempty"`
-		Until *int64 `json:"until,omitempty"`
-		RecurrenceDuration *int64 `json:"recurrence_duration,omitempty"`
-		ParentUUID *uuid.UUID `json:"parent_uuid,omitempty"`
-		Annotations []Annotation `json:"annotations,omitempty"`
-		Tags []string `json:"tags"`
-		Urgency float64 `json:"urgency"`
-	}
-
 	var raw taskJSON
 	if err := json.Unmarshal(data, &raw); err != nil {
 		return err
@@ -366,21 +345,13 @@ func CreateTaskWithModifier(description string, mod *Modifier) (*Task, error) {
 	return task, nil
 }
 
-// GetTask retrieves a task by UUID
-func GetTask(taskUUID uuid.UUID) (*Task, error) {
-	db := GetDB()
-	if db == nil {
-		return nil, fmt.Errorf("database not initialized")
-	}
-
-	query := `
-		SELECT id, uuid, status, description, project, priority,
-		created, modified, start, end, due, scheduled, wait, until_date,
-		recurrence_duration, parent_uuid, annotations
-		FROM tasks
-		WHERE uuid = ?
-	`
+// scanner is satisfied by both *sql.Row and *sql.Rows.
+type scanner interface {
+	Scan(dest ...interface{}) error
+}
 
+// scanTask reads a single task row from a scanner and populates all fields including tags.
+func scanTask(s scanner) (*Task, error) {
 	task := &Task{}
 	var (
 		uuidStr string
@@ -398,7 +369,7 @@ func GetTask(taskUUID uuid.UUID) (*Task, error) {
 		annotationsStr interface{}
 	)
 
-	err := db.QueryRow(query, taskUUID.String()).Scan(
+	err := s.Scan(
 		&task.ID,
 		&uuidStr,
 		&task.Status,
@@ -417,22 +388,18 @@ func GetTask(taskUUID uuid.UUID) (*Task, error) {
 		&parentUUIDStr,
 		&annotationsStr,
 	)
 
 	if err != nil {
-		return nil, fmt.Errorf("failed to get task: %w", err)
+		return nil, err
 	}
 
-	// Parse UUID
 	task.UUID, err = uuid.Parse(uuidStr)
 	if err != nil {
 		return nil, fmt.Errorf("failed to parse UUID: %w", err)
 	}
 
-	// Convert timestamps
 	task.Created = time.Unix(created, 0)
 	task.Modified = time.Unix(modified, 0)
 
-	// Convert nullable fields
 	task.Project = sqlToStringPtr(project)
 	task.Start = sqlToTime(start)
 	task.End = sqlToTime(end)
@@ -444,7 +411,6 @@ func GetTask(taskUUID uuid.UUID) (*Task, error) {
 	task.ParentUUID = sqlToUUIDPtr(parentUUIDStr)
 	task.Annotations = sqlToAnnotations(annotationsStr)
 
-	// Load tags
 	tags, err := task.GetTags()
 	if err != nil {
 		return nil, fmt.Errorf("failed to load tags: %w", err)
@@ -454,6 +420,29 @@ func GetTask(taskUUID uuid.UUID) (*Task, error) {
 	return task, nil
 }
 
+// GetTask retrieves a task by UUID
+func GetTask(taskUUID uuid.UUID) (*Task, error) {
+	db := GetDB()
+	if db == nil {
+		return nil, fmt.Errorf("database not initialized")
+	}
+
+	query := `
+		SELECT id, uuid, status, description, project, priority,
+		created, modified, start, end, due, scheduled, wait, until_date,
+		recurrence_duration, parent_uuid, annotations
+		FROM tasks
+		WHERE uuid = ?
+	`
+
+	task, err := scanTask(db.QueryRow(query, taskUUID.String()))
+	if err != nil {
+		return nil, fmt.Errorf("failed to get task: %w", err)
+	}
+
+	return task, nil
+}
+
 // GetTasks retrieves all tasks with optional filtering
 func GetTasks(filter *Filter) ([]*Task, error) {
 	db := GetDB()
@@ -489,76 +478,10 @@ func GetTasks(filter *Filter) ([]*Task, error) {
 	tasks := []*Task{}
 
 	for rows.Next() {
-		task := &Task{}
-		var (
-			uuidStr string
-			project interface{}
-			created int64
-			modified int64
-			start interface{}
-			end interface{}
-			due interface{}
-			scheduled interface{}
-			wait interface{}
-			until interface{}
-			recurDuration interface{}
-			parentUUIDStr interface{}
-			annotationsStr interface{}
-		)
-
-		err := rows.Scan(
-			&task.ID,
-			&uuidStr,
-			&task.Status,
-			&task.Description,
-			&project,
-			&task.Priority,
-			&created,
-			&modified,
-			&start,
-			&end,
-			&due,
-			&scheduled,
-			&wait,
-			&until,
-			&recurDuration,
-			&parentUUIDStr,
-			&annotationsStr,
-		)
-
+		task, err := scanTask(rows)
 		if err != nil {
 			return nil, fmt.Errorf("failed to scan task: %w", err)
 		}
-
-		// Parse UUID
-		task.UUID, err = uuid.Parse(uuidStr)
-		if err != nil {
-			return nil, fmt.Errorf("failed to parse UUID: %w", err)
-		}
-
-		// Convert timestamps
-		task.Created = time.Unix(created, 0)
-		task.Modified = time.Unix(modified, 0)
-
-		// Convert nullable fields
-		task.Project = sqlToStringPtr(project)
-		task.Start = sqlToTime(start)
-		task.End = sqlToTime(end)
-		task.Due = sqlToTime(due)
-		task.Scheduled = sqlToTime(scheduled)
-		task.Wait = sqlToTime(wait)
-		task.Until = sqlToTime(until)
-		task.RecurrenceDuration = sqlToDuration(recurDuration)
-		task.ParentUUID = sqlToUUIDPtr(parentUUIDStr)
-		task.Annotations = sqlToAnnotations(annotationsStr)
-
-		// Load tags
-		tags, err := task.GetTags()
-		if err != nil {
-			return nil, fmt.Errorf("failed to load tags: %w", err)
-		}
-		task.Tags = tags
-
 		tasks = append(tasks, task)
 	}
 
@@ -813,6 +736,48 @@ func (t *Task) IsRecurringInstance() bool {
 	return t.ParentUUID != nil
 }
 
+// GetDeletedTasks retrieves soft-deleted tasks, optionally filtered by age.
+// If olderThan is non-nil, only returns tasks deleted more than that duration ago.
+func GetDeletedTasks(olderThan *time.Duration) ([]*Task, error) {
+	db := GetDB()
+	if db == nil {
+		return nil, fmt.Errorf("database not initialized")
+	}
+
+	query := `
+		SELECT id, uuid, status, description, project, priority,
+		created, modified, start, end, due, scheduled, wait, until_date,
+		recurrence_duration, parent_uuid, annotations
+		FROM tasks
+		WHERE status = ?`
+	args := []interface{}{byte(StatusDeleted)}
+
+	if olderThan != nil {
+		cutoff := timeNow().Add(-*olderThan).Unix()
+		query += ` AND end < ?`
+		args = append(args, cutoff)
+	}
+
+	query += ` ORDER BY end ASC`
+
+	rows, err := db.Query(query, args...)
+	if err != nil {
+		return nil, fmt.Errorf("failed to query deleted tasks: %w", err)
+	}
+	defer rows.Close()
+
+	var tasks []*Task
+	for rows.Next() {
+		task, err := scanTask(rows)
+		if err != nil {
+			return nil, fmt.Errorf("failed to scan task: %w", err)
+		}
+		tasks = append(tasks, task)
+	}
+
+	return tasks, nil
+}
+
 // PopulateUrgency computes and sets the Urgency field on the given tasks.
 func PopulateUrgency(tasks ...*Task) {
 	cfg, _ := GetConfig()
@@ -102,14 +102,20 @@ func (c *Client) PullChanges(since int64) ([]ChangeLogEntry, error) {
 func (c *Client) PushChanges(tasks []*engine.Task) error {
 	// Convert tasks to JSON
 	var taskData []json.RawMessage
+	var marshalErrors []string
 	for _, task := range tasks {
 		data, err := json.Marshal(task)
 		if err != nil {
+			marshalErrors = append(marshalErrors, fmt.Sprintf("task %s: %v", task.UUID, err))
 			continue
 		}
 		taskData = append(taskData, data)
 	}
 
+	if len(taskData) == 0 && len(marshalErrors) > 0 {
+		return fmt.Errorf("all tasks failed to marshal: %s", strings.Join(marshalErrors, "; "))
+	}
+
 	reqBody := map[string]interface{}{
 		"tasks": taskData,
 		"client_id": c.clientID,
@@ -139,6 +145,11 @@ func (c *Client) PushChanges(tasks []*engine.Task) error {
 		return fmt.Errorf("server returned %d: %s", resp.StatusCode, string(body))
 	}
 
+	if len(marshalErrors) > 0 {
+		return fmt.Errorf("pushed %d tasks but %d failed to marshal: %s",
+			len(taskData), len(marshalErrors), strings.Join(marshalErrors, "; "))
+	}
+
 	return nil
 }
 
@@ -219,7 +230,7 @@ func (c *Client) Sync(strategy ConflictResolution, reporter ProgressReporter) (*
 	}
 
 	// Convert changes to tasks
-	remoteTasks, err := c.parseChanges(changes)
+	remoteTasks, err := c.ParseChanges(changes)
 	if err != nil {
 		if len(changes) > 0 {
 			reporter.CompletePhase()
@@ -283,6 +294,11 @@ func (c *Client) Sync(strategy ConflictResolution, reporter ProgressReporter) (*
 			continue
 		}
 
+		// Mark change_log entry as sync-originated to prevent feedback loop
+		if err := engine.MarkChangeLogAsSync(task.UUID.String()); err != nil {
+			result.Errors = append(result.Errors, fmt.Sprintf("failed to mark change as sync for %s: %v", task.UUID, err))
+		}
+
 		// Reload task to ensure we have the database ID
 		savedTask, err := engine.GetTask(task.UUID)
 		if err != nil {
@@ -347,18 +363,7 @@ func (c *Client) Sync(strategy ConflictResolution, reporter ProgressReporter) (*
 
 // getLastSyncTime retrieves the last sync timestamp from database
 func (c *Client) getLastSyncTime() int64 {
-	db := engine.GetDB()
-	if db == nil {
-		return 0
-	}
-
-	var lastSync int64
-	err := db.QueryRow("SELECT last_sync FROM sync_state WHERE client_id = ?", c.clientID).Scan(&lastSync)
-	if err != nil {
-		return 0
-	}
-
-	return lastSync
+	return GetLastSyncTime(c.clientID)
 }
 
 // updateLastSyncTime updates the last sync timestamp
@@ -376,6 +381,27 @@ func (c *Client) updateLastSyncTime(timestamp int64) {
 
 // getLocalChanges retrieves local changes since a timestamp
 func (c *Client) getLocalChanges(since int64) ([]*engine.Task, error) {
+	return GetLocalChanges(since)
+}
+
+// GetLastSyncTime retrieves the last sync timestamp for a client ID from the database.
+func GetLastSyncTime(clientID string) int64 {
+	db := engine.GetDB()
+	if db == nil {
+		return 0
+	}
+
+	var lastSync int64
+	err := db.QueryRow("SELECT last_sync FROM sync_state WHERE client_id = ?", clientID).Scan(&lastSync)
+	if err != nil {
+		return 0
+	}
+
+	return lastSync
+}
+
+// GetLocalChanges retrieves local (non-sync-originated) changes since a timestamp.
+func GetLocalChanges(since int64) ([]*engine.Task, error) {
 	db := engine.GetDB()
 	if db == nil {
 		return nil, fmt.Errorf("database not initialized")
@@ -384,7 +410,7 @@ func (c *Client) getLocalChanges(since int64) ([]*engine.Task, error) {
 	rows, err := db.Query(`
 		SELECT DISTINCT task_uuid
 		FROM change_log
-		WHERE changed_at > ?
+		WHERE changed_at > ? AND source = 'local'
 		ORDER BY changed_at ASC
 	`, since)
 	if err != nil {
@@ -415,8 +441,8 @@ func (c *Client) getLocalChanges(since int64) ([]*engine.Task, error) {
 	return tasks, nil
 }
 
-// parseChanges converts change log entries to tasks
-func (c *Client) parseChanges(changes []ChangeLogEntry) ([]*engine.Task, error) {
+// ParseChanges converts change log entries to tasks
+func (c *Client) ParseChanges(changes []ChangeLogEntry) ([]*engine.Task, error) {
 	// Sort changes by timestamp (primary) and ID (secondary) to ensure correct order
 	// This handles same-second updates (e.g., CREATE followed by UPDATE with tags)
 	sort.Slice(changes, func(i, j int) bool {
@@ -666,16 +692,31 @@ func parseTagsFromChangeLog(s string) []string {
 // pushQueuedChanges sends queued changes to server
 func (c *Client) pushQueuedChanges(changes []QueuedChange) error {
 	var tasks []*engine.Task
+	var unmarshalErrors []string
+
 	for _, change := range changes {
 		var task engine.Task
 		if err := json.Unmarshal(change.Data, &task); err != nil {
+			unmarshalErrors = append(unmarshalErrors, fmt.Sprintf("queued change: %v", err))
 			continue
 		}
 		tasks = append(tasks, &task)
 	}
 
-	return c.PushChanges(tasks)
+	if len(tasks) == 0 && len(unmarshalErrors) > 0 {
+		return fmt.Errorf("all queued changes failed to unmarshal: %s", strings.Join(unmarshalErrors, "; "))
+	}
+
+	if err := c.PushChanges(tasks); err != nil {
+		return err
+	}
+
+	if len(unmarshalErrors) > 0 {
+		return fmt.Errorf("pushed %d tasks but %d queued changes failed to unmarshal: %s",
+			len(tasks), len(unmarshalErrors), strings.Join(unmarshalErrors, "; "))
+	}
+
+	return nil
 }
 
 // SyncResult represents the result of a sync operation
@@ -57,7 +57,11 @@ func MergeTasks(local, remote []*engine.Task, strategy ConflictResolution) ([]*e
 		if DetectConflict(task, remoteTask) {
 			conflicts++
 			winner := resolveConflict(task, remoteTask, strategy)
-			logConflict(task, remoteTask, winner)
+			winnerLabel := "local"
+			if winner == remoteTask {
+				winnerLabel = "remote"
+			}
+			logConflict(task, remoteTask, winnerLabel)
 			result = append(result, winner)
 		} else {
 			// No conflict - use either (same content)
@@ -110,17 +114,12 @@ func resolveConflict(local, remote *engine.Task, strategy ConflictResolution) *e
 }
 
 // logConflict writes conflict information to log file
-func logConflict(local, remote *engine.Task, winner *engine.Task) {
+func logConflict(local, remote *engine.Task, winnerLabel string) {
 	logPath, err := engine.GetSyncConflictLogPath()
 	if err != nil {
 		return
 	}
 
-	winnerLabel := "local"
-	if winner.UUID == remote.UUID && winner.Modified.Equal(remote.Modified) {
-		winnerLabel = "remote"
-	}
-
 	entry := fmt.Sprintf(
 		"[%s] Conflict on task %s\n"+
 			"  Local: modified %s - %s\n"+
@@ -6,7 +6,7 @@ import { get } from 'svelte/store';
  * @typedef {import('./types.js').AuthTokens} AuthTokens
  */
 
-const API_BASE = import.meta.env.VITE_API_URL || 'http://localhost:8080';
+export const API_BASE = import.meta.env.VITE_API_URL || 'http://localhost:8080';
 
 /**
  * Make authenticated API request
@@ -18,12 +18,12 @@ const API_BASE = import.meta.env.VITE_API_URL || 'http://localhost:8080';
 export async function apiRequest(endpoint, options = {}) {
 	const auth = get(authStore);
 
+	/** @type {Record<string, string>} */
 	const headers = {
 		'Content-Type': 'application/json',
-		...options.headers
+		.../** @type {Record<string, string>} */ (options.headers)
 	};
 
-	// Add auth token if available
 	if (auth.accessToken) {
 		headers['Authorization'] = `Bearer ${auth.accessToken}`;
 	}
@@ -34,11 +34,9 @@ export async function apiRequest(endpoint, options = {}) {
 		headers
 	});
 
-	// Token expired - try refresh
 	if (response.status === 401 && auth.refreshToken) {
 		const refreshed = await refreshAccessToken(auth.refreshToken);
 		if (refreshed) {
-			// Retry with new token
 			headers['Authorization'] = `Bearer ${refreshed.access_token}`;
 			return apiRequest(endpoint, { ...options, headers });
 		}
@@ -78,7 +76,6 @@ async function refreshAccessToken(refreshToken) {
 
 	const result = await response.json();
 	if (result.success) {
-		// Update auth store
 		authStore.setTokens(result.data);
 		return result.data;
 	}
@@ -1,4 +1,4 @@
-import { apiRequest } from './client.js';
+import { apiRequest, API_BASE } from './client.js';
 
 /**
  * @typedef {import('./types.js').Task} Task
@@ -8,12 +8,8 @@ import { apiRequest } from './client.js';
  * @typedef {import('./types.js').User} User
  */
 
-const API_BASE = import.meta.env.VITE_API_URL || 'http://localhost:8080';
-
-// Tasks API
 export const tasks = {
 	/**
-	 * List all tasks with optional filters
 	 * @param {TaskFilters} [filters]
 	 * @returns {Promise<Task[]>}
 	 */
@@ -33,20 +29,12 @@ export const tasks = {
 		return apiRequest(`/tasks${query ? `?${query}` : ''}`);
 	},
 
-	/**
-	 * Get single task by UUID
-	 * @param {string} uuid
-	 * @returns {Promise<Task>}
-	 */
+	/** @param {string} uuid @returns {Promise<Task>} */
 	async get(uuid) {
 		return apiRequest(`/tasks/${uuid}`);
 	},
 
-	/**
-	 * Create new task
-	 * @param {Partial<Task>} task
-	 * @returns {Promise<Task>}
-	 */
+	/** @param {Partial<Task>} task @returns {Promise<Task>} */
	async create(task) {
 		return apiRequest('/tasks', {
 			method: 'POST',
@@ -55,7 +43,6 @@ export const tasks = {
 	},
 
 	/**
-	 * Update existing task
 	 * @param {string} uuid
 	 * @param {Partial<Task>} updates
 	 * @returns {Promise<Task>}
@@ -67,46 +54,29 @@ export const tasks = {
 		});
 	},
 
-	/**
-	 * Delete task
-	 * @param {string} uuid
-	 * @returns {Promise<void>}
-	 */
+	/** @param {string} uuid @returns {Promise<void>} */
 	async delete(uuid) {
 		return apiRequest(`/tasks/${uuid}`, { method: 'DELETE' });
 	},
 
-	/**
-	 * Complete task
-	 * @param {string} uuid
-	 * @returns {Promise<Task>}
-	 */
+	/** @param {string} uuid @returns {Promise<Task>} */
 	async complete(uuid) {
 		return apiRequest(`/tasks/${uuid}/complete`, { method: 'POST' });
 	},
 
-	/**
-	 * Start task timer
-	 * @param {string} uuid
-	 * @returns {Promise<Task>}
-	 */
+	/** @param {string} uuid @returns {Promise<Task>} */
 	async start(uuid) {
 		return apiRequest(`/tasks/${uuid}/start`, { method: 'POST' });
 	},
 
-	/**
-	 * Stop task timer
-	 * @param {string} uuid
-	 * @returns {Promise<Task>}
-	 */
+	/** @param {string} uuid @returns {Promise<Task>} */
 	async stop(uuid) {
 		return apiRequest(`/tasks/${uuid}/stop`, { method: 'POST' });
 	},
 
 	/**
-	 * Parse CLI input and create task
 	 * @param {string} input - Raw opal CLI syntax
-	 * @returns {Promise<Task>}
+	 * @returns {Promise<{task?: Task} & Task>}
 	 */
 	async parse(input) {
 		return apiRequest('/tasks/parse', {
|
return apiRequest('/tasks/parse', {
|
||||||
@@ -115,29 +85,20 @@ export const tasks = {
|
|||||||
});
|
});
|
||||||
},
|
},
|
||||||
|
|
||||||
/**
|
/** @param {string} reportName @returns {Promise<Task[]>} */
|
||||||
* List tasks by report name
|
|
||||||
* @param {string} reportName
|
|
||||||
* @returns {Promise<Task[]>}
|
|
||||||
*/
|
|
||||||
async listByReport(reportName) {
|
async listByReport(reportName) {
|
||||||
const result = await apiRequest(`/tasks?report=${encodeURIComponent(reportName)}`);
|
const result = await apiRequest(`/tasks?report=${encodeURIComponent(reportName)}`);
|
||||||
return result.tasks ?? result;
|
return result.tasks ?? result;
|
||||||
}
|
}
|
||||||
};
|
};
|
||||||
|
|
||||||
// Tags API
|
|
||||||
export const tags = {
|
export const tags = {
|
||||||
/**
|
/** @returns {Promise<string[]>} */
|
||||||
* List all unique tags
|
|
||||||
* @returns {Promise<string[]>}
|
|
||||||
*/
|
|
||||||
async list() {
|
async list() {
|
||||||
return apiRequest('/tags');
|
return apiRequest('/tags');
|
||||||
},
|
},
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* Add tag to task
|
|
||||||
* @param {string} uuid
|
* @param {string} uuid
|
||||||
* @param {string} tag
|
* @param {string} tag
|
||||||
* @returns {Promise<void>}
|
* @returns {Promise<void>}
|
||||||
@@ -150,7 +111,6 @@ export const tags = {
|
|||||||
},
|
},
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* Remove tag from task
|
|
||||||
* @param {string} uuid
|
* @param {string} uuid
|
||||||
* @param {string} tag
|
* @param {string} tag
|
||||||
* @returns {Promise<void>}
|
* @returns {Promise<void>}
|
||||||
@@ -162,21 +122,15 @@ export const tags = {
|
|||||||
}
|
}
|
||||||
};
|
};
|
||||||
|
|
||||||
// Projects API
|
|
||||||
export const projects = {
|
export const projects = {
|
||||||
/**
|
/** @returns {Promise<string[]>} */
|
||||||
* List all projects
|
|
||||||
* @returns {Promise<string[]>}
|
|
||||||
*/
|
|
||||||
async list() {
|
async list() {
|
||||||
return apiRequest('/projects');
|
return apiRequest('/projects');
|
||||||
}
|
}
|
||||||
};
|
};
|
||||||
|
|
||||||
// Sync API
|
|
||||||
export const sync = {
|
export const sync = {
|
||||||
/**
|
/**
|
||||||
* Get changes since timestamp
|
|
||||||
* @param {number} since - Unix timestamp
|
* @param {number} since - Unix timestamp
|
||||||
* @param {string} clientId
|
* @param {string} clientId
|
||||||
* @returns {Promise<any[]>}
|
* @returns {Promise<any[]>}
|
||||||
@@ -189,8 +143,7 @@ export const sync = {
|
|||||||
},
|
},
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* Push local changes to server
|
* @param {Partial<Task>[]} tasks
|
||||||
* @param {Task[]} tasks
|
|
||||||
* @param {string} clientId
|
* @param {string} clientId
|
||||||
* @returns {Promise<{processed: number, conflicts: number}>}
|
* @returns {Promise<{processed: number, conflicts: number}>}
|
||||||
*/
|
*/
|
||||||
@@ -202,12 +155,8 @@ export const sync = {
|
|||||||
}
|
}
|
||||||
};
|
};
|
||||||
|
|
||||||
// Auth API
|
|
||||||
export const auth = {
|
export const auth = {
|
||||||
/**
|
/** @returns {Promise<{url: string, state: string}>} */
|
||||||
* Get OAuth login URL
|
|
||||||
* @returns {Promise<{url: string, state: string}>}
|
|
||||||
*/
|
|
||||||
async getLoginUrl() {
|
async getLoginUrl() {
|
||||||
const response = await fetch(`${API_BASE}/auth/login`);
|
const response = await fetch(`${API_BASE}/auth/login`);
|
||||||
const result = await response.json();
|
const result = await response.json();
|
||||||
@@ -218,7 +167,6 @@ export const auth = {
|
|||||||
},
|
},
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* Exchange OAuth code for tokens
|
|
||||||
* @param {string} code
|
* @param {string} code
|
||||||
* @returns {Promise<{access_token: string, refresh_token: string, expires_at: number, token_type: string, user: User}>}
|
* @returns {Promise<{access_token: string, refresh_token: string, expires_at: number, token_type: string, user: User}>}
|
||||||
*/
|
*/
|
||||||
@@ -236,11 +184,7 @@ export const auth = {
|
|||||||
return result.data;
|
return result.data;
|
||||||
},
|
},
|
||||||
|
|
||||||
/**
|
/** @param {string} refreshToken @returns {Promise<void>} */
|
||||||
* Logout (revoke refresh token)
|
|
||||||
* @param {string} refreshToken
|
|
||||||
* @returns {Promise<void>}
|
|
||||||
*/
|
|
||||||
async logout(refreshToken) {
|
async logout(refreshToken) {
|
||||||
return apiRequest('/auth/logout', {
|
return apiRequest('/auth/logout', {
|
||||||
method: 'POST',
|
method: 'POST',
|
||||||
|
|||||||
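The dominant change across these API hunks is collapsing multi-line JSDoc blocks into single-line annotations. Both forms carry the same type information to checkJs-style tooling; a minimal standalone sketch, with hypothetical `doubleVerbose`/`doubleTerse` functions that are not from this repo:

```javascript
// @ts-check

/**
 * Multi-line form, as removed by the diff.
 * @param {number} n
 * @returns {number}
 */
function doubleVerbose(n) {
  return n * 2;
}

/** @param {number} n @returns {number} */
function doubleTerse(n) {
  return n * 2;
}

// Both declare identical parameter and return types to the checker.
console.log(doubleVerbose(21), doubleTerse(21)); // 42 42
```

The one-line form keeps hover types and checkJs coverage while cutting four lines per method.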
@@ -29,7 +29,6 @@
   /** @type {HTMLDivElement|null} */
   let sheetEl = null;

-  // Body scroll lock — managed in afterUpdate to avoid SSR document access
   afterUpdate(() => {
     if (!mounted) return;
     if (open) {
@@ -51,7 +50,6 @@
       e.preventDefault();
       onClose();
     }
-    // Focus trap
     if (e.key === 'Tab' && sheetEl) {
       const focusable = sheetEl.querySelectorAll(
         'button, [href], input, select, textarea, [tabindex]:not([tabindex="-1"])'
@@ -115,10 +113,8 @@

     if (dragging) {
       if (e.cancelable) e.preventDefault();
-      // Only allow dragging down (positive)
       dragOffset = Math.max(0, deltaY);

-      // Track velocity
       const now = Date.now();
       const dt = now - lastTime;
       if (dt > 0 && lastY !== null) {
@@ -1,11 +1,7 @@
 <script>
   import { activeFilter, setFilter } from '$lib/stores/filters.js';

-  /**
-   * Remove a single token from the active filter string.
-   * If it's the last token, clear the entire filter.
-   * @param {string} token
-   */
+  /** @param {string} token */
   function removeToken(token) {
     if (!$activeFilter) return;
     const tokens = $activeFilter.trim().split(/\s+/);
@@ -19,7 +19,6 @@
   /** @type {FilterModal} */
   let filterModal;

-  /** Map backend report names to display labels */
   const reportLabels = /** @type {Record<string, string>} */ ({
     list: 'Pending',
     next: 'Next',
@@ -25,7 +25,6 @@
     const trimmed = value.trim();
     if (!trimmed || loading) return;

-    // Merge user input with active filter, deduplicating tokens
     const merged = mergeInputWithFilter(trimmed, $activeFilter || '');

     try {
@@ -61,22 +60,17 @@
     }, 150);
   }

-  /**
-   * Insert text at cursor position
-   * @param {string} text
-   */
+  /** @param {string} text */
   function insertAtCursor(text) {
     if (!inputEl) return;
     const start = inputEl.selectionStart ?? value.length;
     const end = inputEl.selectionEnd ?? value.length;

-    // Add leading space if cursor isn't at start and prev char isn't a space
     const needsSpace = start > 0 && value[start - 1] !== ' ';
     const insert = (needsSpace ? ' ' : '') + text;

     value = value.slice(0, start) + insert + value.slice(end);

-    // Restore focus and cursor position after the inserted text
     const newPos = start + insert.length;
     requestAnimationFrame(() => {
       if (inputEl) {
@@ -86,18 +80,12 @@
     });
   }

-  /**
-   * Get the current input value (for PropertyPills smart replace)
-   * @returns {string}
-   */
+  /** @returns {string} */
   export function getInputValue() {
     return value;
   }

-  /**
-   * Set the input value (for PropertyPills smart replace)
-   * @param {string} newValue
-   */
+  /** @param {string} newValue */
   export function setInputValue(newValue) {
     value = newValue;
   }
@@ -6,10 +6,7 @@
   */
  export let onInsert;

-  /** Current input value for smart replace */
   export let inputValue = '';

-  /** Callback to update the input value when doing smart replace */
   export let onInputChange = /** @type {(value: string) => void} */ (() => {});

   export let visible = false;
@@ -29,7 +26,6 @@
   * @param {{ text: string, isTag: boolean }} pill
   */
  function handleInsert(pill) {
-    // Tags are always additive — no smart replace
    if (!pill.isTag && inputValue) {
      const prefix = pill.text; // e.g. "due:"
      const cleaned = removeTokenByPrefix(inputValue, prefix);
@@ -49,12 +49,10 @@
     const deltaY = touch.clientY - startY;

     if (!locked && !swiping) {
-      // Angle-based lock-in: horizontal must dominate
       if (Math.abs(deltaX) > 10 && Math.abs(deltaX) > Math.abs(deltaY) * 2) {
         swiping = true;
         locked = true;
       } else if (Math.abs(deltaY) > 10) {
-        // Vertical scroll — abort
         startX = null;
         startY = null;
         return;
@@ -74,14 +72,12 @@
     }

     if (offsetX >= THRESHOLD) {
-      // Right swipe — complete (row collapses)
       completed = true;
       offsetX = window.innerWidth;
       setTimeout(() => {
         onSwipeRight();
       }, 200);
     } else if (offsetX <= -THRESHOLD) {
-      // Left swipe — start/stop (row stays)
       triggered = true;
       offsetX = -window.innerWidth;
       setTimeout(() => {
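The deleted "angle-based lock-in" comment described the gesture arbitration in the hunk above: a swipe is claimed as horizontal only when |deltaX| exceeds 10px and is at least double |deltaY|, while a vertical move past 10px aborts the gesture. That rule can be isolated as a pure function for testing; `classifyGesture` is our name for the sketch, not a function in the component:

```javascript
// Pure-function sketch of the swipe lock-in rule shown in the diff.
function classifyGesture(deltaX, deltaY) {
  // Horizontal must dominate: past the 10px dead zone AND at least 2x the vertical delta.
  if (Math.abs(deltaX) > 10 && Math.abs(deltaX) > Math.abs(deltaY) * 2) return 'horizontal';
  // Otherwise a clear vertical move means the user is scrolling.
  if (Math.abs(deltaY) > 10) return 'vertical';
  return 'undecided';
}

console.log(classifyGesture(30, 5));  // horizontal
console.log(classifyGesture(8, 40));  // vertical
console.log(classifyGesture(5, 5));   // undecided
```

Keeping the rule pure makes the 10px dead zone and the 2:1 dominance ratio easy to tune without touching touch-event plumbing.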
@@ -28,7 +28,6 @@
   /** @type {() => void} */
   export let onClose;

-  // Editing state — only one field at a time
   /** @type {string|null} */
   let editingField = null;

@@ -37,7 +36,6 @@
   let editProject = '';
   let editTagInput = '';

-  // Recurring instance: remember user choice for this sheet session
   /** @type {'instance'|'template'|null} */
   let recurringChoice = null;

@@ -67,17 +65,12 @@
     0: 3
   });

-  /**
-   * Format a unix timestamp as yyyy-MM-dd for date input
-   * @param {number} ts
-   * @returns {string}
-   */
+  /** @param {number} ts @returns {string} */
   function tsToDateValue(ts) {
     return format(fromUnix(ts), 'yyyy-MM-dd');
   }

   /**
-   * Check if a field edit on a recurring instance needs the instance/template prompt
    * @param {string} field
    * @returns {boolean}
    */
@@ -124,14 +117,10 @@

   function saveCurrentEdit() {
     if (!editingField) return;
-    // Each field handles its own save via its input events
     editingField = null;
   }

-  /**
-   * Get the UUID to update based on recurring choice
-   * @returns {string}
-   */
+  /** @returns {string} */
   function getTargetUuid() {
     if (recurringChoice === 'template' && task.parent_uuid) {
       return task.parent_uuid;
@@ -194,7 +183,6 @@
   async function removeTag(tag) {
     try {
       await tagsAPI.remove(getTargetUuid(), tag);
-      // Optimistic: update local
       task = { ...task, tags: task.tags.filter(t => t !== tag) };
     } catch (error) {
       console.error('Failed to remove tag:', error);
@@ -207,7 +195,6 @@
     editTagInput = '';
     try {
       await tagsAPI.add(getTargetUuid(), tag);
-      // Optimistic: update local
       task = { ...task, tags: [...task.tags, tag] };
     } catch (error) {
       console.error('Failed to add tag:', error);
@@ -259,12 +246,10 @@
     }
   }

-  // After recurring choice is made, proceed with the pending edit
   $: if (recurringChoice && pendingEditField) {
     const field = pendingEditField;
     pendingEditField = null;
     if (field === 'priority-cycle') {
-      // Direct cycle
       const next = priorityCycle[task.priority] ?? 1;
       onUpdate(getTargetUuid(), { priority: /** @type {import('$lib/api/types.js').TaskPriority} */ (next) });
     } else {
@@ -417,6 +402,12 @@
         </span>
       {/if}
     </div>
+  {:else}
+    <!-- svelte-ignore a11y-click-events-have-key-events a11y-no-static-element-interactions -->
+    <div class="field-row editable" on:click={() => startEdit('scheduled')}>
+      <span class="field-label">Scheduled</span>
+      <span class="field-value clickable">Set...</span>
+    </div>
   {/if}

   <!-- Wait -->
@@ -440,6 +431,12 @@
         </span>
       {/if}
     </div>
+  {:else}
+    <!-- svelte-ignore a11y-click-events-have-key-events a11y-no-static-element-interactions -->
+    <div class="field-row editable" on:click={() => startEdit('wait')}>
+      <span class="field-label">Wait</span>
+      <span class="field-value clickable">Set...</span>
+    </div>
   {/if}

   <!-- Until -->
@@ -463,6 +460,12 @@
         </span>
       {/if}
     </div>
+  {:else}
+    <!-- svelte-ignore a11y-click-events-have-key-events a11y-no-static-element-interactions -->
+    <div class="field-row editable" on:click={() => startEdit('until')}>
+      <span class="field-label">Until</span>
+      <span class="field-value clickable">Set...</span>
+    </div>
   {/if}

   <!-- Active since -->
@@ -1,73 +0,0 @@
-<script>
-  /**
-   * @type {Array<{value: string, label: string}>}
-   */
-  export let options = [];
-  export let value = '';
-  export let label = '';
-  export let placeholder = 'Select...';
-  export let disabled = false;
-  export let id = '';
-</script>
-
-<div class="select-group">
-  {#if label}
-    <label for={id} class="label">{label}</label>
-  {/if}
-
-  <select
-    {id}
-    bind:value
-    {disabled}
-    class="select"
-    on:change
-  >
-    <option value="" disabled selected>{placeholder}</option>
-    {#each options as option}
-      <option value={option.value}>{option.label}</option>
-    {/each}
-  </select>
-</div>
-
-<style>
-  .select-group {
-    display: flex;
-    flex-direction: column;
-    gap: 0.25rem;
-    width: 100%;
-  }
-
-  .label {
-    font-size: var(--font-size-sm);
-    font-weight: 500;
-    color: var(--text-primary);
-  }
-
-  .select {
-    width: 100%;
-    padding: 0.75rem;
-    font-size: var(--font-size-base);
-    font-family: inherit;
-    border: 1px solid var(--border-color);
-    border-radius: var(--border-radius);
-    background-color: var(--bg-primary);
-    color: var(--text-primary);
-    cursor: pointer;
-    appearance: none;
-    background-image: url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' fill='none' viewBox='0 0 24 24' stroke='%236b7280'%3E%3Cpath stroke-linecap='round' stroke-linejoin='round' stroke-width='2' d='M19 9l-7 7-7-7'%3E%3C/path%3E%3C/svg%3E");
-    background-repeat: no-repeat;
-    background-position: right 0.5rem center;
-    background-size: 1.5rem;
-    padding-right: 2.5rem;
-  }
-
-  .select:focus {
-    outline: none;
-    border-color: var(--color-primary);
-  }
-
-  .select:disabled {
-    background-color: var(--bg-tertiary);
-    cursor: not-allowed;
-  }
-</style>
@@ -1 +0,0 @@
-// place files you want to import through the `$lib` alias in this folder.
@@ -19,10 +19,16 @@ import { getItem, setItem, removeItem } from '$lib/utils/storage.js';
 const STORAGE_KEY = 'opal_auth';
 const DEV_MODE = import.meta.env.DEV;

-/**
- * Load auth state from localStorage
- * @returns {AuthState}
- */
+/** @type {AuthState} */
+const EMPTY_STATE = {
+  accessToken: null,
+  refreshToken: null,
+  expiresAt: null,
+  user: null,
+  isAuthenticated: false
+};
+
+/** @returns {AuthState} */
 function loadAuth() {
   // In dev mode, auto-authenticate with a dev user.
   // API requests still go to the real backend (which runs with auth disabled).
@@ -36,21 +42,11 @@ function loadAuth() {
     };
   }

-  if (!browser) {
-    return {
-      accessToken: null,
-      refreshToken: null,
-      expiresAt: null,
-      user: null,
-      isAuthenticated: false
-    };
-  }
+  if (!browser) return EMPTY_STATE;

   const stored = getItem(STORAGE_KEY);
   if (stored) {
-    // Check if token expired
     if (stored.expiresAt && stored.expiresAt < Date.now() / 1000) {
-      // Token expired - clear
       removeItem(STORAGE_KEY);
       return loadAuth();
     }
@@ -60,28 +56,21 @@ function loadAuth() {
     };
   }

-  return {
-    accessToken: null,
-    refreshToken: null,
-    expiresAt: null,
-    user: null,
-    isAuthenticated: false
-  };
+  return EMPTY_STATE;
 }

-/**
- * Create auth store
- */
 function createAuthStore() {
   const { subscribe, set, update } = writable(loadAuth());

+  /** Persist state to localStorage */
+  function persist(/** @type {AuthState} */ state) {
+    if (browser) setItem(STORAGE_KEY, state);
+  }
+
   return {
     subscribe,

-    /**
-     * Set authentication tokens
-     * @param {AuthTokens} tokens
-     */
+    /** @param {AuthTokens} tokens */
     setTokens(tokens) {
       update(state => {
         const newState = {
@@ -91,33 +80,21 @@ function createAuthStore() {
           expiresAt: tokens.expires_at,
           isAuthenticated: true
         };
+        persist(newState);
-        if (browser) {
-          setItem(STORAGE_KEY, newState);
-        }
-
         return newState;
       });
     },

-    /**
-     * Set user info
-     * @param {User} user
-     */
+    /** @param {User} user */
     setUser(user) {
       update(state => {
         const newState = { ...state, user };
-        if (browser) {
-          setItem(STORAGE_KEY, newState);
-        }
+        persist(newState);
         return newState;
       });
     },

-    /**
-     * Set full auth data (tokens + user)
-     * @param {AuthTokens & {user: User}} data
-     */
+    /** @param {AuthTokens & {user: User}} data */
     setAuth(data) {
       const newState = {
         accessToken: data.access_token,
@@ -126,28 +103,13 @@ function createAuthStore() {
         user: data.user,
         isAuthenticated: true
       };
+      persist(newState);
-      if (browser) {
-        setItem(STORAGE_KEY, newState);
-      }
-
       set(newState);
     },

-    /**
-     * Clear auth (logout)
-     */
     clear() {
-      if (browser) {
-        removeItem(STORAGE_KEY);
-      }
-      set({
-        accessToken: null,
-        refreshToken: null,
-        expiresAt: null,
-        user: null,
-        isAuthenticated: false
-      });
+      if (browser) removeItem(STORAGE_KEY);
+      set(EMPTY_STATE);
     }
   };
 }
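The auth-store hunks above replace several inlined empty-state object literals and repeated `if (browser) setItem(...)` blocks with one shared `EMPTY_STATE` constant and a `persist()` helper. A dependency-free sketch of that shape; storage is faked with a `Map` so it runs outside the browser, and while the names mirror the diff, the sketch itself is ours:

```javascript
// Fake storage standing in for localStorage-backed getItem/setItem.
const storage = new Map();
const STORAGE_KEY = 'auth';

// One canonical empty state instead of N inlined copies.
const EMPTY_STATE = { accessToken: null, user: null, isAuthenticated: false };

let state = EMPTY_STATE;

// One persistence path instead of repeating the guard at every call site.
function persist(next) {
  storage.set(STORAGE_KEY, next); // stands in for setItem(STORAGE_KEY, next)
}

function setTokens(accessToken) {
  state = { ...state, accessToken, isAuthenticated: true };
  persist(state);
}

function clear() {
  storage.delete(STORAGE_KEY);
  state = EMPTY_STATE;
}

setTokens('abc');
console.log(storage.get(STORAGE_KEY).isAuthenticated); // true
clear();
console.log(state === EMPTY_STATE); // true
```

Resetting to the shared constant also makes "logged out" a single identity-comparable value rather than five fields that must each be nulled correctly.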
@@ -6,7 +6,6 @@ const RECENT_KEY = 'opal_recent_filters';
 const MAX_RECENT = 8;

 /**
- * Create a localStorage-backed writable store
  * @template T
  * @param {string} key
  * @param {T} fallback
@@ -21,10 +20,7 @@ function persisted(key, fallback) {
 export const activeFilter = persisted(ACTIVE_KEY, /** @type {string|null} */ (null));
 export const recentFilters = persisted(RECENT_KEY, /** @type {string[]} */ ([]));

-/**
- * Set the active filter and add it to recents
- * @param {string} str
- */
+/** @param {string} str */
 export function setFilter(str) {
   const trimmed = str.trim();
   if (!trimmed) {
@@ -39,17 +35,11 @@ export function setFilter(str) {
   });
 }

-/**
- * Clear the active filter
- */
 export function clearFilter() {
   activeFilter.set(null);
 }

-/**
- * Remove a specific entry from recents
- * @param {string} str
- */
+/** @param {string} str */
 export function removeRecent(str) {
   recentFilters.update(recents => recents.filter(r => r !== str));
 }
@@ -20,10 +20,7 @@ import { generateUUID } from '$lib/utils/uuid.js';
|
|||||||
const SYNC_STATE_KEY = 'opal_sync_state';
|
const SYNC_STATE_KEY = 'opal_sync_state';
|
||||||
const CLIENT_ID_KEY = 'opal_client_id';
|
const CLIENT_ID_KEY = 'opal_client_id';
|
||||||
|
|
||||||
/**
|
/** @returns {string} */
|
||||||
* Get or create client ID
|
|
||||||
* @returns {string}
|
|
||||||
*/
|
|
||||||
function getClientId() {
|
function getClientId() {
|
||||||
let clientId = getItem(CLIENT_ID_KEY);
|
let clientId = getItem(CLIENT_ID_KEY);
|
||||||
if (!clientId) {
|
if (!clientId) {
|
||||||
@@ -33,10 +30,7 @@ function getClientId() {
|
|||||||
return clientId;
|
return clientId;
|
||||||
}
|
}
|
||||||
|
|
||||||
/**
|
/** @returns {SyncState} */
|
||||||
* Load sync state
|
|
||||||
* @returns {SyncState}
|
|
||||||
*/
|
|
||||||
function loadSyncState() {
|
function loadSyncState() {
|
||||||
```diff
   const stored = getItem(SYNC_STATE_KEY);
   return {
@@ -48,19 +42,13 @@ function loadSyncState() {
   };
 }
 
-/**
- * Create sync store
- */
 function createSyncStore() {
   const { subscribe, set, update } = writable(loadSyncState());
 
   return {
     subscribe,
 
-    /**
-     * Perform sync
-     * @returns {Promise<SyncResult>}
-     */
+    /** @returns {Promise<SyncResult>} */
     async sync() {
       update(state => ({ ...state, status: 'syncing', error: null }));
 
@@ -68,7 +56,8 @@ function createSyncStore() {
       const state = loadSyncState();
       const queue = getQueue();
 
-      let result = {
+      /** @type {SyncResult} */
+      const result = {
         pulled: 0,
         pushed: 0,
         conflicts_resolved: 0,
@@ -76,28 +65,25 @@ function createSyncStore() {
         errors: []
       };
 
-      // Push queued changes
       if (queue.length > 0) {
         const tasks = queue.map(q => q.data);
         try {
           await syncAPI.push(tasks, state.clientId);
           clearQueue();
           result.pushed = queue.length;
-        } catch (error) {
+        } catch (/** @type {any} */ error) {
           result.errors.push(`Failed to push queue: ${error.message}`);
         }
       }
 
-      // Pull changes from server
       try {
         const changes = await syncAPI.getChanges(state.lastSync, state.clientId);
         result.pulled = changes.length;
         // TODO: Apply changes to local state
-      } catch (error) {
+      } catch (/** @type {any} */ error) {
         result.errors.push(`Failed to pull changes: ${error.message}`);
       }
 
-      // Update sync state
       const now = Math.floor(Date.now() / 1000);
       setItem(SYNC_STATE_KEY, { lastSync: now });
 
@@ -110,7 +96,7 @@ function createSyncStore() {
       }));
 
       return result;
-    } catch (error) {
+    } catch (/** @type {any} */ error) {
       update(state => ({
         ...state,
         status: 'error',
@@ -120,9 +106,6 @@ function createSyncStore() {
       }
     },
 
-    /**
-     * Update queue size
-     */
     updateQueueSize() {
      update(state => ({
        ...state,
```
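The sync flow above (push the offline queue, then pull, accumulating per-phase errors instead of aborting) can be sketched as a standalone function. This is a sketch only: `runSync` and the stubbed `syncAPI` are illustrative names, not the real store's API.

```javascript
// Sketch of sync()'s accumulation logic: push the offline queue first, then
// pull server changes; each phase records its failure in result.errors
// instead of aborting the whole sync.
// runSync and the stubbed syncAPI are illustrative names, not the real module.
async function runSync(syncAPI, queue, state) {
  const result = { pulled: 0, pushed: 0, conflicts_resolved: 0, errors: [] };

  if (queue.length > 0) {
    try {
      await syncAPI.push(queue.map(q => q.data), state.clientId);
      result.pushed = queue.length;
    } catch (error) {
      result.errors.push(`Failed to push queue: ${error.message}`);
    }
  }

  try {
    const changes = await syncAPI.getChanges(state.lastSync, state.clientId);
    result.pulled = changes.length;
  } catch (error) {
    result.errors.push(`Failed to pull changes: ${error.message}`);
  }

  return result;
}
```

A failed push leaves the queue intact for the next attempt, which is why the real store only calls `clearQueue()` inside the `try`.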
```diff
@@ -7,17 +7,22 @@ import { queueChange } from '$lib/utils/sync-queue.js';
  * @typedef {import('$lib/api/types.js').TaskFilters} TaskFilters
  */
 
-/**
- * Create tasks store
- */
 function createTasksStore() {
   const { subscribe, set, update } = writable(/** @type {Task[]} */ ([]));
 
+  /**
+   * Replace a single task in the array by UUID.
+   * @param {string} uuid
+   * @param {(task: Task) => Task} fn
+   */
+  function updateByUuid(uuid, fn) {
+    update(tasks => tasks.map(t => t.uuid === uuid ? fn(t) : t));
+  }
+
   return {
     subscribe,
 
     /**
-     * Load tasks by report name
      * @param {string} reportName - Backend report name (e.g. 'list', 'next', 'completed')
      */
     async loadReport(reportName) {
@@ -31,14 +36,13 @@ function createTasksStore() {
     },
 
     /**
-     * Parse CLI input and create a task
      * @param {string} input - Raw opal CLI syntax
      * @returns {Promise<Task>}
      */
     async parseAndCreate(input) {
       try {
         const result = await tasksAPI.parse(input);
-        const created = result.task ?? result;
+        const created = /** @type {Task} */ (result.task ?? result);
         update(tasks => [created, ...tasks]);
         return created;
       } catch (error) {
@@ -47,10 +51,7 @@ function createTasksStore() {
       }
     },
 
-    /**
-     * Load all tasks from API
-     * @param {TaskFilters} [filters]
-     */
+    /** @param {TaskFilters} [filters] */
     async load(filters = {}) {
       try {
         const tasks = await tasksAPI.list(filters);
@@ -62,8 +63,8 @@ function createTasksStore() {
     },
 
     /**
-     * Add new task (optimistic update)
-     * @param {Partial<Task>} task
+     * Optimistic create — queues offline on failure.
+     * @param {Task} task
      */
     async add(task) {
       try {
@@ -71,101 +72,55 @@ function createTasksStore() {
         update(tasks => [...tasks, created]);
         return created;
       } catch (error) {
-        // Queue for offline sync
-        queueChange({
-          type: 'create',
-          task_uuid: task.uuid,
-          data: task
-        });
-
-        // Still update UI optimistically
+        queueChange({ type: 'create', task_uuid: task.uuid, data: task });
         update(tasks => [...tasks, task]);
         throw error;
       }
     },
 
     /**
-     * Update task (optimistic update)
+     * Optimistic update — queues offline on failure.
      * @param {string} uuid
      * @param {Partial<Task>} updates
      */
     async updateTask(uuid, updates) {
-      // Optimistic update
-      update(tasks => {
-        const index = tasks.findIndex(t => t.uuid === uuid);
-        if (index >= 0) {
-          tasks[index] = { ...tasks[index], ...updates, modified: Date.now() / 1000 };
-        }
-        return tasks;
-      });
+      updateByUuid(uuid, t => ({ ...t, ...updates, modified: Date.now() / 1000 }));
 
       try {
         const updated = await tasksAPI.update(uuid, updates);
-        // Sync with server response
-        update(tasks => {
-          const index = tasks.findIndex(t => t.uuid === uuid);
-          if (index >= 0) {
-            tasks[index] = updated;
-          }
-          return tasks;
-        });
+        updateByUuid(uuid, () => updated);
       } catch (error) {
-        // Queue for offline sync
-        queueChange({
-          type: 'update',
-          task_uuid: uuid,
-          data: updates
-        });
+        queueChange({ type: 'update', task_uuid: uuid, data: updates });
         throw error;
       }
     },
 
-    /**
-     * Delete task
-     * @param {string} uuid
-     */
+    /** @param {string} uuid */
     async deleteTask(uuid) {
-      // Optimistic removal
       update(tasks => tasks.filter(t => t.uuid !== uuid));
 
       try {
         await tasksAPI.delete(uuid);
       } catch (error) {
-        queueChange({
-          type: 'delete',
-          task_uuid: uuid,
-          data: {}
-        });
+        queueChange({ type: 'delete', task_uuid: uuid, data: {} });
         throw error;
       }
     },
 
-    /**
-     * Start task timer (optimistic)
-     * @param {string} uuid
-     */
+    /** @param {string} uuid */
    async startTask(uuid) {
       const now = Math.floor(Date.now() / 1000);
-      update(tasks => tasks.map(t =>
-        t.uuid === uuid ? { ...t, start: now } : t
-      ));
+      updateByUuid(uuid, t => ({ ...t, start: now }));
       try {
         const updated = await tasksAPI.start(uuid);
-        update(tasks => tasks.map(t =>
-          t.uuid === uuid ? updated : t
-        ));
+        updateByUuid(uuid, () => updated);
       } catch (error) {
-        update(tasks => tasks.map(t =>
-          t.uuid === uuid ? { ...t, start: null } : t
-        ));
+        updateByUuid(uuid, t => ({ ...t, start: null }));
         throw error;
       }
     },
 
-    /**
-     * Stop task timer (optimistic)
-     * @param {string} uuid
-     */
+    /** @param {string} uuid */
     async stopTask(uuid) {
       /** @type {number|null} */
       let prevStart = null;
@@ -178,21 +133,14 @@ function createTasksStore() {
       }));
       try {
         const updated = await tasksAPI.stop(uuid);
-        update(tasks => tasks.map(t =>
-          t.uuid === uuid ? updated : t
-        ));
+        updateByUuid(uuid, () => updated);
       } catch (error) {
-        update(tasks => tasks.map(t =>
-          t.uuid === uuid ? { ...t, start: prevStart } : t
-        ));
+        updateByUuid(uuid, t => ({ ...t, start: prevStart }));
         throw error;
       }
     },
 
-    /**
-     * Complete task
-     * @param {string} uuid
-     */
+    /** @param {string} uuid */
     async complete(uuid) {
       try {
         await tasksAPI.complete(uuid);
@@ -211,7 +159,6 @@ function createTasksStore() {
 
 export const tasksStore = createTasksStore();
 
-// Derived stores for filtered views
 export const pendingTasks = derived(
   tasksStore,
   $tasks => $tasks.filter(t => t.status === 'P')
@@ -225,6 +172,7 @@ export const completedTasks = derived(
 export const tasksByProject = derived(
   tasksStore,
   $tasks => {
+    /** @type {Record<string, Task[]>} */
     const grouped = {};
     $tasks.forEach(task => {
      const project = task.project || 'No Project';
```
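The `updateByUuid` helper introduced in this refactor is a pure map over the task array; stripped of the Svelte `writable.update()` wrapper it reduces to the following sketch, with plain objects standing in for the store's `Task` type:

```javascript
// The tasks store wraps this in writable.update(); here it is a pure
// function over a plain array so the behavior is easy to see in isolation:
// only the task whose uuid matches is replaced, and a new array is returned.
function updateByUuid(tasks, uuid, fn) {
  return tasks.map(t => (t.uuid === uuid ? fn(t) : t));
}

const tasks = [
  { uuid: 'a', description: 'buy milk', start: null },
  { uuid: 'b', description: 'write report', start: null },
];

// An optimistic "start timer" touches task 'b' only.
const started = updateByUuid(tasks, 'b', t => ({ ...t, start: 1700000000 }));
```

Because `map` returns a fresh array and the callback returns a fresh object, Svelte's store subscribers see a new reference and re-render, while the original array is left untouched.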
```diff
@@ -11,10 +11,7 @@ const DEFAULT_THEME = 'obsidian';
 /** @type {ThemeName[]} */
 export const THEMES = ['obsidian', 'paper', 'midnight'];
 
-/**
- * Read stored theme, falling back to default
- * @returns {ThemeName}
- */
+/** @returns {ThemeName} */
 function getInitial() {
   if (!browser) return DEFAULT_THEME;
   const stored = localStorage.getItem(STORAGE_KEY);
@@ -27,7 +24,6 @@ function getInitial() {
 function createThemeStore() {
   const { subscribe, set, update } = writable(getInitial());
 
-  /** Apply theme to the document */
   function apply(/** @type {ThemeName} */ theme) {
     if (browser) {
       document.documentElement.dataset.theme = theme;
@@ -35,7 +31,6 @@ function createThemeStore() {
     }
   }
 
-  // Apply on every change
   subscribe(apply);
 
   return {
@@ -44,7 +39,6 @@ function createThemeStore() {
     set(theme) {
       set(theme);
     },
-    /** Cycle to the next theme */
    cycle() {
      update(current => {
        const idx = THEMES.indexOf(current);
```
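The hunk cuts off before `cycle()`'s body. Given the `THEMES` list and the `THEMES.indexOf(current)` line shown above, a plausible wrap-around step looks like this; the modulo arithmetic is an assumption, since the actual body is elided:

```javascript
// Sketch of cycling through the THEMES list from the theme store.
// The real store runs this inside writable.update(); the modulo wrap-around
// is an assumption, because the diff hunk elides the function body.
const THEMES = ['obsidian', 'paper', 'midnight'];

function nextTheme(current) {
  const idx = THEMES.indexOf(current);
  // An unknown theme gives idx === -1, which falls through to THEMES[0].
  return THEMES[(idx + 1) % THEMES.length];
}
```

Because `subscribe(apply)` runs on every store change, each cycle immediately updates `document.documentElement.dataset.theme`.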
```diff
@@ -1,7 +1,6 @@
 import { format, formatDistance, isToday, isTomorrow, isPast } from 'date-fns';
 
 /**
- * Format Unix timestamp to readable date
  * @param {number|null} timestamp - Unix timestamp (seconds)
  * @param {string} formatStr - date-fns format string
  * @returns {string}
@@ -12,8 +11,7 @@ export function formatDate(timestamp, formatStr = 'MMM d, yyyy') {
 }
 
 /**
- * Format Unix timestamp to relative time
- * @param {number|null} timestamp
+ * @param {number|null} timestamp - Unix timestamp (seconds)
  * @returns {string}
  */
 export function formatRelative(timestamp) {
@@ -27,8 +25,7 @@ export function formatRelative(timestamp) {
 }
 
 /**
- * Check if timestamp is overdue
- * @param {number|null} timestamp
+ * @param {number|null} timestamp - Unix timestamp (seconds)
  * @returns {boolean}
  */
 export function isOverdue(timestamp) {
@@ -36,20 +33,12 @@ export function isOverdue(timestamp) {
   return isPast(new Date(timestamp * 1000));
 }
 
-/**
- * Convert Date object to Unix timestamp
- * @param {Date} date
- * @returns {number}
- */
+/** @param {Date} date @returns {number} */
 export function toUnix(date) {
   return Math.floor(date.getTime() / 1000);
 }
 
-/**
- * Convert Unix timestamp to Date object
- * @param {number} timestamp
- * @returns {Date}
- */
+/** @param {number} timestamp @returns {Date} */
 export function fromUnix(timestamp) {
   return new Date(timestamp * 1000);
 }
```
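`toUnix` and `fromUnix` above are exact inverses for whole-second timestamps: multiplying by 1000 and then flooring the division back cancels out, so round-tripping is lossless.

```javascript
// toUnix/fromUnix exactly as shown in the diff: seconds-based Unix timestamps
// converted to and from millisecond-based JavaScript Dates.
function toUnix(date) {
  return Math.floor(date.getTime() / 1000);
}

function fromUnix(timestamp) {
  return new Date(timestamp * 1000);
}

// Round-trips exactly because fromUnix only produces whole-second Dates;
// the floor in toUnix only matters for Dates with sub-second precision.
const roundTripped = toUnix(fromUnix(1700000000));
```

The floor does discard sub-second precision when converting an arbitrary `Date`, which is fine here since the backend stores second-resolution timestamps.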
```diff
@@ -1,6 +1,4 @@
-/**
- * Valid filter attribute keys (these actually work as query filters in the engine)
- */
+/** Valid filter attribute keys that work as query filters in the engine */
 const FILTER_ATTRS = new Set(['status', 'project', 'priority']);
 
 /**
@@ -13,7 +11,6 @@ const FILTER_ATTRS = new Set(['status', 'project', 'priority']);
 /**
  * Parse a filter string into structured tokens.
  * Recognizes +tag, -tag, and key:value for supported filter attributes.
- * Unknown tokens (like due:3d) are preserved as raw tokens for pass-through.
  * @param {string} str
  * @returns {ParsedFilter}
  */
@@ -41,7 +38,6 @@ export function parseFilterString(str) {
 }
 
 /**
- * Convert a parsed filter to TaskFilters for the API
  * @param {ParsedFilter} parsed
  * @returns {import('$lib/api/types.js').TaskFilters}
  */
@@ -58,8 +54,7 @@ export function filterToParams(parsed) {
 
 /**
  * Remove a token matching a given prefix from a string.
- * Used by PropertyPills smart replace: e.g. removeTokenByPrefix("buy milk due:tomorrow", "due:")
- * returns "buy milk"
+ * e.g. removeTokenByPrefix("buy milk due:tomorrow", "due:") → "buy milk"
  * @param {string} input
  * @param {string} prefix
  * @returns {string}
@@ -72,7 +67,6 @@ export function removeTokenByPrefix(input, prefix) {
 
 /**
  * Deduplicate filter tokens from user input that are already in the active filter.
- * Prevents submitting "+grocer +grocer" when filter is +grocer and user also typed +grocer.
  * @param {string} userInput
  * @param {string} filterStr
  * @returns {string}
```
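The body of `parseFilterString` is elided by the hunks, but its doc comment fully describes the tokenizer: `+tag`, `-tag`, `key:value` for keys in `FILTER_ATTRS`, everything else preserved raw. A sketch matching that description, with the `{ tags, exclude, attrs, raw }` result shape assumed (the real `ParsedFilter` type is not shown in the diff):

```javascript
// Sketch of the tokenizer described by parseFilterString's doc comment.
// The { tags, exclude, attrs, raw } shape is an assumption; the real
// ParsedFilter typedef is not visible in this diff.
const FILTER_ATTRS = new Set(['status', 'project', 'priority']);

function parseFilterString(str) {
  const parsed = { tags: [], exclude: [], attrs: {}, raw: [] };
  for (const token of str.trim().split(/\s+/).filter(Boolean)) {
    if (token.startsWith('+')) {
      parsed.tags.push(token.slice(1));
    } else if (token.startsWith('-')) {
      parsed.exclude.push(token.slice(1));
    } else {
      const [key, ...rest] = token.split(':');
      if (rest.length && FILTER_ATTRS.has(key)) {
        parsed.attrs[key] = rest.join(':');
      } else {
        parsed.raw.push(token); // e.g. due:3d passes through untouched
      }
    }
  }
  return parsed;
}
```

Keeping unrecognized `key:value` tokens raw is what lets engine-only filters like `due:3d` survive the web UI round trip.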
```diff
@@ -1,7 +1,6 @@
 import { browser } from '$app/environment';
 
 /**
- * Get item from localStorage
  * @param {string} key
  * @returns {any}
  */
@@ -18,7 +17,6 @@ export function getItem(key) {
 }
 
 /**
- * Set item in localStorage
  * @param {string} key
  * @param {any} value
  */
@@ -32,10 +30,7 @@ export function setItem(key, value) {
   }
 }
 
-/**
- * Remove item from localStorage
- * @param {string} key
- */
+/** @param {string} key */
 export function removeItem(key) {
   if (!browser) return;
 
@@ -46,9 +41,6 @@ export function removeItem(key) {
   }
 }
 
-/**
- * Clear all items
- */
 export function clear() {
   if (!browser) return;
 
```
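The `getItem`/`setItem` bodies are elided by the hunks; what is visible is the `browser` guard for SSR and the `@returns {any}` contract. A sketch of the likely shape, with the JSON (de)serialization and try/catch explicitly flagged as assumptions:

```javascript
// Sketch of the storage.js wrappers: JSON (de)serialization around
// localStorage plus a guard for non-browser (SSR) contexts. The JSON and
// try/catch details are assumptions; the real bodies are elided in the diff.
// `browser` normally comes from $app/environment; detected locally here.
const browser = typeof localStorage !== 'undefined';

function getItem(key) {
  if (!browser) return null;
  try {
    const raw = localStorage.getItem(key);
    return raw === null ? null : JSON.parse(raw);
  } catch {
    return null; // corrupt JSON reads as "not set"
  }
}

function setItem(key, value) {
  if (!browser) return;
  try {
    localStorage.setItem(key, JSON.stringify(value));
  } catch {
    // Quota or privacy-mode failures are swallowed; storage is best-effort.
  }
}
```

Treating storage as best-effort is what lets the sync queue and theme store run unchanged during server-side rendering.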
```diff
@@ -1,16 +1,11 @@
 import { getItem, setItem } from './storage.js';
 import { generateUUID } from './uuid.js';
 
-/**
- * @typedef {import('$lib/api/types.js').QueuedChange} QueuedChange
- */
+/** @typedef {import('$lib/api/types.js').QueuedChange} QueuedChange */
 
 const QUEUE_KEY = 'opal_sync_queue';
 
-/**
- * Add change to sync queue
- * @param {Omit<QueuedChange, 'id'|'timestamp'>} change
- */
+/** @param {Omit<QueuedChange, 'id'|'timestamp'>} change */
 export function queueChange(change) {
   const queue = getQueue();
 
@@ -23,33 +18,16 @@ export function queueChange(change) {
   setItem(QUEUE_KEY, queue);
 }
 
-/**
- * Get all queued changes
- * @returns {QueuedChange[]}
- */
+/** @returns {QueuedChange[]} */
 export function getQueue() {
   return getItem(QUEUE_KEY) || [];
 }
 
-/**
- * Clear sync queue
- */
 export function clearQueue() {
   setItem(QUEUE_KEY, []);
 }
 
-/**
- * Get queue size
- * @returns {number}
- */
-export function getQueueSize() {
-  return getQueue().length;
-}
-
-/**
- * Remove specific change from queue
- * @param {string} id
- */
+/** @param {string} id */
 export function removeFromQueue(id) {
   const queue = getQueue().filter((change) => change.id !== id);
   setItem(QUEUE_KEY, queue);
```
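The queue module reduces to a few list operations over the storage wrapper. A self-contained sketch with an in-memory stand-in for `getItem`/`setItem`, and sequential string ids in place of the real module's `generateUUID()`:

```javascript
// In-memory stand-in for the storage.js wrapper so the sketch runs anywhere.
const store = new Map();
const getItem = (key) => (store.has(key) ? store.get(key) : null);
const setItem = (key, value) => store.set(key, value);

const QUEUE_KEY = 'opal_sync_queue';

// Append a change, stamping id and timestamp. Ids are simplified to a
// counter here; the real module imports generateUUID from './uuid.js'.
function queueChange(change) {
  const queue = getQueue();
  queue.push({ ...change, id: String(queue.length + 1), timestamp: Date.now() });
  setItem(QUEUE_KEY, queue);
}

function getQueue() {
  return getItem(QUEUE_KEY) || [];
}

function removeFromQueue(id) {
  setItem(QUEUE_KEY, getQueue().filter(change => change.id !== id));
}

queueChange({ type: 'create', task_uuid: 'a', data: {} });
queueChange({ type: 'delete', task_uuid: 'b', data: {} });
removeFromQueue('1');
```

Note the diff also deletes `getQueueSize()`; callers that need the size can use `getQueue().length` directly, which is all the helper did.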
```diff
@@ -1,8 +1,8 @@
-/**
- * Generate UUID v4
- * @returns {string}
- */
+/** @returns {string} */
 export function generateUUID() {
+  if (typeof crypto !== 'undefined' && crypto.randomUUID) {
+    return crypto.randomUUID();
+  }
   return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, (c) => {
     const r = (Math.random() * 16) | 0;
     const v = c === 'x' ? r : (r & 0x3) | 0x8;
```
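The change adds a `crypto.randomUUID()` fast path ahead of the existing `Math.random` fallback. The complete function, with the tail that the hunk cuts off filled in from the standard template-replace pattern:

```javascript
/** @returns {string} */
function generateUUID() {
  // Prefer the native generator when available (secure contexts in browsers,
  // and modern Node). crypto.randomUUID is only defined in those environments.
  if (typeof crypto !== 'undefined' && crypto.randomUUID) {
    return crypto.randomUUID();
  }
  // Math.random fallback: a v4 layout with the version nibble forced to 4
  // and the variant bits forced to 10xx (hex 8, 9, a, or b).
  return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, (c) => {
    const r = (Math.random() * 16) | 0;
    const v = c === 'x' ? r : (r & 0x3) | 0x8;
    return v.toString(16);
  });
}
```

The fallback matters because `crypto.randomUUID` is unavailable on plain-HTTP origins, where this app can still run; `Math.random` is not cryptographically strong, but queue ids only need uniqueness.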
```diff
@@ -37,6 +37,11 @@
   // Subscribe to store
   const unsubscribe = tasksStore.subscribe(value => {
     tasks = value;
+    // Keep selectedTask in sync with store changes
+    if (selectedTask) {
+      const updated = value.find(t => t.uuid === selectedTask.uuid);
+      if (updated) selectedTask = updated;
+    }
   });
 
   onMount(() => {
@@ -197,10 +202,6 @@
   async function handleUpdate(uuid, updates) {
     try {
       await tasksStore.updateTask(uuid, updates);
-      // Keep selectedTask fresh
-      if (selectedTask?.uuid === uuid) {
-        selectedTask = { ...selectedTask, ...updates };
-      }
     } catch (error) {
       console.error('Failed to update task:', error);
     }
```
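The added subscribe logic moves selection-refresh out of `handleUpdate` and into the store subscription, so any store change (not just this page's updates) refreshes the selected task. The lookup itself is a plain find; `refreshSelection` is a hypothetical name for illustration, since the page inlines it in the subscribe callback:

```javascript
// Sketch of the selection-refresh logic added to the subscribe callback:
// after the store emits a new array, re-resolve the selected task by uuid
// so the detail pane never holds a stale object. refreshSelection is a
// hypothetical name; the Svelte page inlines this inside subscribe().
function refreshSelection(tasks, selectedTask) {
  if (!selectedTask) return selectedTask;
  const updated = tasks.find(t => t.uuid === selectedTask.uuid);
  return updated ?? selectedTask; // keep the old object if it disappeared
}

const fresh = [{ uuid: 'a', description: 'buy oat milk' }];
const refreshed = refreshSelection(fresh, { uuid: 'a', description: 'buy milk' });
```

Centralizing this in the subscription is why the now-deleted `handleUpdate` branch became redundant: `updateTask` already pushes the server's response through the store, which re-triggers the subscriber.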
```diff
@@ -11,9 +11,6 @@
   let saving = false;
   let error = '';
 
-  /**
-   * Save API key as manual auth
-   */
   async function saveApiKey() {
     if (!apiKey.trim()) {
       error = 'API key is required';
@@ -24,7 +21,6 @@
     error = '';
 
     try {
-      // Store API key as access token (for manual auth mode)
       authStore.setAuth({
         access_token: apiKey,
         refresh_token: '',
@@ -45,9 +41,6 @@
     }
   }
 
-  /**
-   * Logout
-   */
   async function logout() {
     if ($authStore.refreshToken) {
       try {
@@ -61,9 +54,6 @@
     goto('/auth/login');
   }
 
-  /**
-   * Trigger manual sync
-   */
   async function triggerSync() {
     try {
       await syncStore.sync();
```