Commit Graph

3652 Commits

Sline
413f29e22a fix: linux theme sync (#5273) 2025-11-01 19:24:47 +08:00
Tunglies
ae319279ae chore(deps): update cc, clash_verge_logger, and version-compare to latest versions 2025-11-01 10:15:12 +08:00
Tunglies
c0e111e756 fix: resolve macOS lightweight mode exit synchronization issues and improve logging levels #5241 2025-11-01 10:09:51 +08:00
Slinetrac
52545a626c chore(lint): enforce no warnings in pre hooks 2025-11-01 09:49:52 +08:00
Tunglies
518875acde refactor: update draft handling and improve benchmark structure 2025-10-31 23:31:04 +08:00
renovate[bot]
b672dd7055 chore(deps): update dependency sass to ^1.93.3 (#5265)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-31 22:54:52 +08:00
renovate[bot]
804641425b chore(deps): update dependency vitest to ^4.0.6 (#5264)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-31 22:51:12 +08:00
Tunglies
d3386908ff fix: improve caching strategy for autobuild jobs 2025-10-31 19:48:36 +08:00
Slinetrac
59e7095b0f chore: up sysproxy git hash 2025-10-31 19:20:07 +08:00
Tunglies
8fc8eb1789 chore: add acknowledgments for contributors in update log 2025-10-31 18:15:46 +08:00
Slinetrac
0f1537ef48 chore: up Cargo.lock 2025-10-31 17:36:33 +08:00
oomeow
8c734a5a35 fix: disable tun mode menu on tray when tun mode is unavailable (#4975)
* fix: check if the service is installed when toggling tun mode from the tray

* chore: cargo fmt

* fix: auto disable tun mode

* docs: update UPDATELOG.md

* fix: init Tun mode status

* chore: update

* feat: disable tun mode tray menu when tun mode is unavailable

* fix: restart core when service uninstall is canceled

* chore: remove check notification when toggling tun mode

* chore: fix updatelog

---------

Co-authored-by: Tunglies <77394545+Tunglies@users.noreply.github.com>
2025-10-31 17:31:40 +08:00
Tunglies
5187712a71 chore(deps): remove zustand and update vite-plugin-monaco-editor to ESM version 2025-10-31 16:55:52 +08:00
renovate[bot]
7b7fa2239b chore(deps): update dependency dayjs to v1.11.19 (#5261)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-31 13:36:35 +08:00
Slinetrac
85ff296912 chore(deps): update deps 2025-10-31 11:28:14 +08:00
renovate[bot]
5e7adf76ca chore(deps): update npm dependencies (#5258)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-31 10:46:42 +08:00
Slinetrac
d094d3885c chore: move CONTRIBUTING_i18n.md to /docs 2025-10-31 10:41:48 +08:00
Tunglies
648c93c066 chore(i18n): update localization file sorting and add i18n contribution guideline
- Add i18n contribution guideline
- Chinese (zh.json): fix missing TPROXY port translation
2025-10-31 05:15:47 +08:00
Tunglies
1e9df69ffc fix: remove unused dependencies from Cargo.toml and Cargo.lock 2025-10-31 00:55:50 +08:00
Tunglies
ef35752d84 fix: specify version for sysproxy dependency in Cargo.toml 2025-10-31 00:33:11 +08:00
Tunglies
ffb7400a22 fix: add updateCargoLock to postUpdateOptions in renovate.json 2025-10-31 00:11:55 +08:00
oomeow
4d5f1f4327 fix: incorrect proxies route 2025-10-30 20:28:56 +08:00
Tunglies
999830aaf5 fix: correct download link for ARM64 Windows setup in autobuild workflow 2025-10-30 20:18:23 +08:00
Tunglies
6d7efbbf28 fix: reorder import statements and enhance normalizeDetailsTags function 2025-10-30 20:08:57 +08:00
Tunglies
a869dbb441 Revert "refactor: profile switch (#5197)"
This reverts commit c2dcd86722.
2025-10-30 18:11:04 +08:00
Tunglies
928f226d10 fix: update clash_verge_service_ipc version to 2.0.21 2025-10-30 18:02:24 +08:00
Tunglies
c27ad3fdcb feat: add log opening functionality in tray menu and update localization 2025-10-30 17:34:41 +08:00
Sline
c2dcd86722 refactor: profile switch (#5197)
* refactor: proxy refresh

* fix(proxy-store): properly hydrate and filter backend provider snapshots

* fix(proxy-store): add monotonic fetch guard and event bridge cleanup

* fix(proxy-store): tweak fetch sequencing guard to prevent snapshot invalidation from wiping fast responses

* docs: UPDATELOG.md

* fix(proxy-snapshot, proxy-groups): restore last-selected proxy and group info

* fix(proxy): merge static and provider entries in snapshot; fix Virtuoso viewport height

* fix(proxy-groups): restrict reduced-height viewport to chain-mode column

* refactor(profiles): introduce a state machine

* refactor: replace state machine with reducer

* refactor: introduce a profile switch worker

* refactor: hooked up a backend-driven profile switch flow

* refactor(profile-switch): serialize switches with async queue and enrich frontend events

* feat(profiles): centralize profile switching with reducer/driver queue to fix stuck UI on rapid toggles

* chore: translate comments and log messages to English to avoid encoding issues

* refactor: migrate backend queue to SwitchDriver actor

* fix(profile): unify error string types in validation helper

* refactor(profile): make switch driver fully async and handle panics safely

* refactor(cmd): move switch-validation helper into new profile_switch module

* refactor(profile): modularize switch logic into profile_switch.rs

* refactor(profile_switch): modularize switch handler

- Break monolithic switch handler into proper module hierarchy
- Move shared globals, constants, and SwitchScope guard to state.rs
- Isolate queue orchestration and async task spawning in driver.rs
- Consolidate switch pipeline and config patching in workflow.rs
- Extract request pre-checks/YAML validation into validation.rs

* refactor(profile_switch): centralize state management and add cancellation flow

- Introduced SwitchManager in state.rs to unify mutex, sequencing, and SwitchScope handling.
- Added SwitchCancellation and SwitchRequest wrappers to encapsulate cancel tokens and notifications.
- Updated driver to allocate task IDs via SwitchManager, cancel old tokens, and queue next jobs in order.
- Updated workflow to check cancellation and sequence at each phase, replacing global flags with manager APIs.

* feat(profile_switch): integrate explicit state machine for profile switching

- workflow.rs:24 now delegates each switch to SwitchStateMachine, passing an owned SwitchRequest.
  Queue cancellation and state-sequence checks are centralized inside the machine instead of scattered guards.
- workflow.rs:176 replaces the old helper with `SwitchStateMachine::new(manager(), None, profiles).run().await`,
  ensuring manual profile patches follow the same workflow (locking, validation, rollback) as queued switches.
- workflow.rs:180 & 275 expose `validate_profile_yaml` and `restore_previous_profile` for reuse inside the state machine.

- workflow/state_machine.rs:1 introduces a dedicated state machine module.
  It manages global mutex acquisition, request/cancellation state, YAML validation, draft patching,
  `CoreManager::update_config`, failure rollback, and tray/notification side-effects.
  Transitions check for cancellations and stale sequences; completions release guards via `SwitchScope` drop.

* refactor(profile-switch): integrate stage-aware panic handling

- src-tauri/src/cmd/profile_switch/workflow/state_machine.rs:1
  Defines SwitchStage and SwitchPanicInfo as crate-visible, wraps each transition in with_stage(...) with catch_unwind, and propagates CmdResult<bool> to distinguish validation failures from panics while keeping cancellation semantics.

- src-tauri/src/cmd/profile_switch/workflow.rs:25
  Updates run_switch_job to return Result<bool, SwitchPanicInfo>, routing timeout, validation, config, and stage panic cases separately. Reuses SwitchPanicInfo for logging/UI notifications; patch_profiles_config maps state-machine panics into user-facing error strings.

- src-tauri/src/cmd/profile_switch/driver.rs:1
  Adds SwitchJobOutcome to unify workflow results: normal completions carry bool, and panics propagate SwitchPanicInfo. The driver loop now logs panics explicitly and uses AssertUnwindSafe(...).catch_unwind() to guard setup-phase panics.

* refactor(profile-switch): add watchdog, heartbeat, and async timeout guards

- Introduce SwitchHeartbeat for stage tracking and timing; log stage transitions with elapsed durations.
- Add watchdog in driver to cancel stalled switches (5s heartbeat timeout).
- Wrap blocking ops (Config::apply, tray updates, profiles_save_file_safe, etc.) with time::timeout to prevent async stalls.
- Improve logs for stage transitions and watchdog timeouts to clarify cancellation points.
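
A minimal sketch of the heartbeat-plus-watchdog shape described above, using tokio `watch` channels; the names and the 1s poll cadence are illustrative, and the real driver cancels through its `SwitchCancellation` tokens rather than a bare flag:

```rust
// Hypothetical sketch: a worker "beats" on each stage transition; a
// watchdog cancels the switch if no beat arrives within 5 seconds.
use tokio::sync::watch;
use tokio::time::{self, Duration, Instant};

#[tokio::main]
async fn main() {
    // The worker publishes the time of its latest stage transition.
    let (beat_tx, beat_rx) = watch::channel(Instant::now());
    let (cancel_tx, cancel_rx) = watch::channel(false);

    // Watchdog: poll the heartbeat once per second.
    let watchdog = tokio::spawn(async move {
        let mut ticker = time::interval(Duration::from_secs(1));
        loop {
            ticker.tick().await;
            if beat_rx.borrow().elapsed() > Duration::from_secs(5) {
                let _ = cancel_tx.send(true); // cancel the stalled switch
                break;
            }
        }
    });

    // Worker: beat on each stage, check for cancellation between stages.
    for stage in ["validate", "patch", "apply"] {
        if *cancel_rx.borrow() {
            eprintln!("switch cancelled by watchdog");
            return;
        }
        let _ = beat_tx.send(Instant::now()); // heartbeat for this stage
        println!("entering stage: {stage}");
        time::sleep(Duration::from_millis(100)).await; // simulated work
    }
    watchdog.abort();
}
```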

* refactor(profile-switch): async post-switch tasks, early lock release, and spawn_blocking for IO

* feat(profile-switch): track cleanup and coordinate pipeline

- Add explicit cleanup tracking in the driver (`cleanup_profiles` map + `CleanupDone` messages) to know when background post-switch work is still running before starting a new workflow. (driver.rs:29-50)
- Update `handle_enqueue` to detect “cleanup in progress”: same-profile retries are short-circuited; other requests collapse the pending queue, cancelling old tokens so only the latest intent survives. (driver.rs:176-247)
- Rework scheduling helpers: `start_next_job` refuses to start while cleanup is outstanding; discarded requests release cancellation tokens; cleanup completion explicitly restarts the pipeline. (driver.rs:258-442)
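
A hedged sketch of the cleanup gating these bullets describe; `Driver`, `Msg`, and the point where cleanup is registered are simplifications of the real driver.rs logic:

```rust
// Illustrative: the driver tracks profiles whose post-switch cleanup is
// still running and refuses to start the next job until the set is empty.
use std::collections::{HashSet, VecDeque};

enum Msg {
    Enqueue(String),     // a new switch request for a profile id
    CleanupDone(String), // background cleanup finished for a profile
}

#[derive(Default)]
struct Driver {
    pending: VecDeque<String>,
    cleanup: HashSet<String>, // profiles with outstanding cleanup
}

impl Driver {
    fn handle(&mut self, msg: Msg) {
        match msg {
            Msg::Enqueue(profile) => {
                // Same-profile retry while its cleanup runs: short-circuit.
                if self.cleanup.contains(&profile) {
                    return;
                }
                // Drop duplicates for the same profile, then append,
                // leaving requests for other profiles pending.
                self.pending.retain(|p| *p != profile);
                self.pending.push_back(profile);
                self.try_start_next();
            }
            Msg::CleanupDone(profile) => {
                self.cleanup.remove(&profile);
                self.try_start_next(); // completion restarts the pipeline
            }
        }
    }

    fn try_start_next(&mut self) {
        // Refuse to start while any cleanup is outstanding.
        if !self.cleanup.is_empty() {
            return;
        }
        if let Some(profile) = self.pending.pop_front() {
            println!("starting switch to {profile}");
            // Simplification: in the real flow, cleanup is registered
            // when the job finishes, not when it starts.
            self.cleanup.insert(profile);
        }
    }
}

fn main() {
    let mut d = Driver::default();
    d.handle(Msg::Enqueue("a".into()));
    d.handle(Msg::Enqueue("b".into())); // queued behind a's cleanup
    d.handle(Msg::CleanupDone("a".into())); // b starts now
}
```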

* feat(profile-switch): unify post-switch cleanup handling

- workflow.rs (25-427) returns `SwitchWorkflowResult` (success + CleanupHandle) or `SwitchWorkflowError`.
  All failure/timeout paths stash post-switch work into a single CleanupHandle.
  Cleanup helpers (`notify_profile_switch_finished` and `close_connections_after_switch`) run inside that task for proper lifetime handling.

- driver.rs (29-439) propagates CleanupHandle through `SwitchJobOutcome`, spawns a bridge to wait for completion, and blocks `start_next_job` until done.
  Direct driver-side panics now schedule failure cleanup via the shared helper.

* tmp

* Revert "tmp"

This reverts commit e582cf4a65.

* refactor: queue frontend events through async dispatcher

* refactor: queue frontend switch/proxy events and throttle notices

* chore: frontend debug log

* fix: re-enable only ProfileSwitchFinished events - keep others suppressed for crash isolation

- Re-enabled only ProfileSwitchFinished events; RefreshClash, RefreshProxy, and ProfileChanged remain suppressed (they log suppression messages)
- Allows frontend to receive task completion notifications for UI feedback while crash isolation continues
- src-tauri/src/core/handle.rs now only suppresses notify_profile_changed
- Serialized emitter, frontend logging bridge, and other diagnostics unchanged

* refactor: refreshClashData

* refactor(proxy): stabilize proxy switch pipeline and rendering

- Add coalescing buffer in notification.rs to emit only the latest proxies-updated snapshot
- Replace nextTick with queueMicrotask in asyncQueue.ts for same-frame hydration
- Hide auto-generated GLOBAL snapshot and preserve optional metadata in proxy-snapshot.ts
- Introduce stable proxy rendering state in AppDataProvider (proxyTargetProfileId, proxyDisplayProfileId, isProxyRefreshPending)
- Update proxy page to fade content during refresh and overlay status banner instead of showing incomplete snapshot

* refactor(profiles): move manual activating logic to reducer for deterministic queue tracking

* refactor: replace proxy-data event bridge with pure polling and simplify proxy store

- Replaced the proxy-data event bridge with pure polling: AppDataProvider now fetches the initial snapshot and drives refreshes from the polled switchStatus, removing verge://refresh-* listeners (src/providers/app-data-provider.tsx).
- Simplified proxy-store by dropping the proxies-updated listener queue and unused payload/normalizer helpers; relies on SWR/provider fetch path + calcuProxies for live updates (src/stores/proxy-store.ts).
- Trimmed layout-level event wiring to keep only notice/show/hide subscriptions, removing obsolete refresh listeners (src/pages/_layout/useLayoutEvents.ts).

* refactor(proxy): streamline proxies-updated handling and store event flow

- AppDataProvider now treats `proxies-updated` as the fast path: the listener
  calls `applyLiveProxyPayload` immediately and schedules only a single fallback
  `fetchLiveProxies` ~600 ms later (replacing the old 0/250/1000/2000 cascade).
  Expensive provider/rule refreshes run in parallel via `Promise.allSettled`, and
  the multi-stage queue on profile updates completion was removed
  (src/providers/app-data-provider.tsx).

- Rebuilt proxy-store to support the event flow: restored `setLive`, provider
  normalization, and an animation-frame + async queue that applies payloads without
  blocking. Exposed `applyLiveProxyPayload` so providers can push events directly
  into the store (src/stores/proxy-store.ts).

* refactor: switch delay

* refactor(app-data-provider): trigger getProfileSwitchStatus revalidation on profile-switch-finished

- AppDataProvider now listens to `profile-switch-finished` and calls `mutate("getProfileSwitchStatus")` to immediately update state and unlock buttons (src/providers/app-data-provider.tsx).
- Retain existing detailed timing logs for monitoring other stages.
- Frontend success notifications remain instant; background refreshes continue asynchronously.

* fix(profiles): prevent duplicate toast on page remount

* refactor(profile-switch): make active switches preemptible and prevent queue piling

- Add notify mechanism to SwitchCancellation to await cancellation without busy-waiting (state.rs:82)
- Collapse pending queue to a single entry in the driver; cancel in-flight task on newer request (driver.rs:232)
- Update handle_update_core to watch cancel token and 30s timeout; release locks, discard draft, and exit early if canceled (state_machine.rs:301)
- Providers revalidate status immediately on profile-switch-finished events (app-data-provider.tsx:208)

* refactor(core): make core reload phase controllable, reduce 0xcfffffff risk

- CoreManager::apply_config now calls `reload_config_with_retry`, each attempt waits up to 5s, retries 3 times; on failure, returns error with duration logged and triggers core restart if needed (src-tauri/src/core/manager/config.rs:175, 205)
- `reload_config_with_retry` logs attempt info on timeout or error; if error is a Mihomo connection issue, fallback to original restart logic (src-tauri/src/core/manager/config.rs:211)
- `reload_config_once` retains original Mihomo call for retry wrapper usage (src-tauri/src/core/manager/config.rs:247)
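
A minimal sketch of the retry-with-timeout shape these notes describe; `reload_config_once` here stands in for the real Mihomo call:

```rust
// Illustrative: each reload attempt is bounded to 5 seconds and retried
// up to 3 times; the caller can fall back to a core restart on failure.
use tokio::time::{timeout, Duration};

async fn reload_config_once() -> Result<(), String> {
    Ok(()) // placeholder for the actual Mihomo reload call
}

async fn reload_config_with_retry() -> Result<(), String> {
    const ATTEMPTS: u32 = 3;
    for attempt in 1..=ATTEMPTS {
        match timeout(Duration::from_secs(5), reload_config_once()).await {
            Ok(Ok(())) => return Ok(()),
            Ok(Err(e)) => eprintln!("attempt {attempt} failed: {e}"),
            Err(_) => eprintln!("attempt {attempt} timed out after 5s"),
        }
    }
    Err(format!("config reload failed after {ATTEMPTS} attempts"))
}

#[tokio::main]
async fn main() {
    if let Err(e) = reload_config_with_retry().await {
        eprintln!("{e}"); // a real caller might restart the core here
    }
}
```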

* chore(frontend-logs): downgrade routine event logs from info to debug

- Logs like `emit_via_app entering spawn_blocking`, `Async emit…`, `Buffered proxies…` are now debug-level (src-tauri/src/core/notification.rs:155, :265, :309…)
- Genuine warnings/errors (failures/timeouts) remain at warn/error
- Core stage logs remain info to keep backend tracking visible

* refactor(frontend-emit): make emit_via_app fire-and-forget async

- `emit_via_app` now a regular function; spawns with `tokio::spawn` and logs a warn if `emit_to` fails, caller returns immediately (src-tauri/src/core/notification.rs:269)
- Removed `.await` at Async emit and flush_proxies calls; only record dispatch duration and warn on failure (src-tauri/src/core/notification.rs:211, :329)

* refactor(ui): restructure profile switch for event-driven speed + polling stability

- Backend
  - SwitchManager maintains a lightweight event queue: added `event_sequence`, `recent_events`, and `SwitchResultEvent`; provides `push_event` / `events_after` (state.rs)
  - `handle_completion` pushes events on success/failure and keeps `last_result` (driver.rs) for frontend incremental fetch
  - New Tauri command `get_profile_switch_events(after_sequence)` exposes `events_after` (profile_switch/mod.rs → profile.rs → lib.rs)
- Notification system
  - `NotificationSystem::process_event` only logs debug, disables WebView `emit_to`, fixes 0xcfffffff
  - Related emit/buffer functions now safe no-op, removed unused structures and warnings (notification.rs)
- Frontend
  - services/cmds.ts defines `SwitchResultEvent` and `getProfileSwitchEvents`
  - `AppDataProvider` holds `switchEventSeqRef`, polls incremental events every 0.25s (busy) / 1s (idle); each event triggers:
      - immediate `globalMutate("getProfiles")` to refresh current profile
      - background refresh of proxies/providers/rules via `Promise.allSettled` (failures logged, non-blocking)
      - forced `mutateSwitchStatus` to correct state
  - original switchStatus effect calls `handleSwitchResult` as fallback; other toast/activation logic handled in profiles.tsx
- Commands / API cleanup
  - removed `pub use profile_switch::*;` in cmd::mod.rs to avoid conflicts; frontend uses new command polling
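
A small sketch of the incremental event queue outlined above; the field names, buffer bound, and payload shape are illustrative:

```rust
// Illustrative: results get a monotonically increasing sequence number,
// a bounded buffer keeps recent events, and `events_after` serves the
// frontend's incremental polling.
use std::collections::VecDeque;

#[derive(Clone, Debug)]
struct SwitchResultEvent {
    sequence: u64,
    profile_id: String,
    success: bool,
}

#[derive(Default)]
struct SwitchEventQueue {
    next_sequence: u64,
    recent: VecDeque<SwitchResultEvent>,
}

impl SwitchEventQueue {
    fn push_event(&mut self, profile_id: String, success: bool) {
        self.next_sequence += 1;
        self.recent.push_back(SwitchResultEvent {
            sequence: self.next_sequence,
            profile_id,
            success,
        });
        while self.recent.len() > 64 {
            self.recent.pop_front(); // keep the buffer bounded
        }
    }

    /// Everything newer than the caller's last-seen sequence number.
    fn events_after(&self, after_sequence: u64) -> Vec<SwitchResultEvent> {
        self.recent
            .iter()
            .filter(|e| e.sequence > after_sequence)
            .cloned()
            .collect()
    }
}

fn main() {
    let mut q = SwitchEventQueue::default();
    q.push_event("profile-a".into(), true);
    q.push_event("profile-b".into(), false);
    // A poller that has seen sequence 1 receives only the second event.
    assert_eq!(q.events_after(1).len(), 1);
}
```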

* refactor(frontend): optimize profile switch with optimistic updates

* refactor(profile-switch): switch to event-driven flow with Profile Store

- SwitchManager pushes events; frontend polls get_profile_switch_events
- Zustand store handles optimistic profiles; AppDataProvider applies updates and background-fetches
- UI flicker removed

* fix(app-data): re-hook profile store updates during switch hydration

* fix(notification): restore frontend event dispatch and non-blocking emits

* fix(app-data-provider): restore proxy refresh and seed snapshot after refactor

* fix: ensure switch completion events are received and handle proxies-updated

* fix(app-data-provider): dedupe switch results by taskId and fix stale profile state

* fix(profile-switch): ensure patch_profiles_config_by_profile_index waits for real completion and handle join failures in apply_config_with_timeout

* docs: UPDATELOG.md

* chore: add necessary comments

* fix(core): always dispatch async proxy snapshot after RefreshClash event

* fix(proxy-store, provider): handle pending snapshots and proxy profiles

- Added pending snapshot tracking in proxy-store so `lastAppliedFetchId` no longer jumps on seed. Profile adoption is deferred until a qualifying fetch completes. Exposed `clearPendingProfile` for rollback support.
- Cleared pending snapshot state whenever live payloads apply or the store resets, preventing stale optimistic profile IDs after failures.
- In provider integration, subscribed to the pending proxy profile and fed it into target-profile derivation. Cleared it on failed switch results so hydration can advance and UI status remains accurate.

* fix(proxy): re-hook tray refresh events into proxy refresh queue

- Reattached listen("verge://refresh-proxy-config", …) at src/providers/app-data-provider.tsx:402 and registered it for cleanup.
- Added matching window fallback handler at src/providers/app-data-provider.tsx:430 so in-app dispatches share the same refresh path.

* fix(proxy-snapshot/proxy-groups): address review findings on snapshot placeholders

- src/utils/proxy-snapshot.ts:72-95 now derives snapshot group members solely from proxy-groups.proxies, so provider ids under `use` no longer generate placeholder proxy items.
- src/components/proxy/proxy-groups.tsx:665-677 lets the hydration overlay capture pointer events (and shows a wait cursor) so users can’t interact with snapshot-only placeholders before live data is ready.

* fix(profile-switch): preserve queued requests and avoid stale connection teardown

- Keep earlier queued switches intact by dropping the blanket “collapse” call: after removing duplicates for the same profile, new requests are simply appended, leaving other profiles pending (driver.rs:376). Resolves queue-loss scenario.
- Gate connection cleanup on real successes so cancelled/stale runs no longer tear down Mihomo connections; success handler now skips close_connections_after_switch when success == false (workflow.rs:419).

* fix(profile-switch, layout): improve profile validation and restore backend refresh

- Hardened profile validation using `tokio::fs` with a 5s timeout and offloading YAML parsing to `AsyncHandler::spawn_blocking`, preventing slow disks or malformed files from freezing the runtime (src-tauri/src/cmd/profile_switch/validation.rs:9, 71).
- Restored backend-triggered refresh handling by listening for `verge://refresh-clash-config` / `verge://refresh-verge-config` and invoking shared refresh services so SWR caches stay in sync with core events (src/pages/_layout/useLayoutEvents.ts:6, 45, 55).
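
A hedged sketch of the hardened validation path, assuming serde_yaml for parsing; the real helper lives in profile_switch/validation.rs and returns the project's `CmdResult`:

```rust
// Illustrative: bound the disk read with a 5s timeout, then offload the
// CPU-bound YAML parse to a blocking thread.
use tokio::time::{timeout, Duration};

async fn validate_profile_yaml(path: String) -> Result<(), String> {
    // A slow disk cannot stall the async runtime past the timeout.
    let text = timeout(Duration::from_secs(5), tokio::fs::read_to_string(&path))
        .await
        .map_err(|_| "profile read timed out".to_string())?
        .map_err(|e| format!("profile read failed: {e}"))?;

    // Parsing happens off the async executor.
    tokio::task::spawn_blocking(move || {
        serde_yaml::from_str::<serde_yaml::Value>(&text)
            .map(|_| ())
            .map_err(|e| format!("invalid YAML: {e}"))
    })
    .await
    .map_err(|e| format!("validation task panicked: {e}"))?
}

#[tokio::main]
async fn main() {
    match validate_profile_yaml("profile.yaml".into()).await {
        Ok(()) => println!("profile OK"),
        Err(e) => eprintln!("{e}"),
    }
}
```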

* feat(profile-switch): handle cancellations for superseded requests

- Added a `cancelled` flag and constructor so superseded requests publish an explicit cancellation instead of a failure (src-tauri/src/cmd/profile_switch/state.rs:249, src-tauri/src/cmd/profile_switch/driver.rs:482)
- Updated the profile switch effect to log cancellations as info, retain the shared `mutate` call, and skip emitting error toasts while still refreshing follow-up work (src/pages/profiles.tsx:554, src/pages/profiles.tsx:581)
- Exposed the new flag on the TypeScript contract to keep downstream consumers type-safe (src/services/cmds.ts:20)

* fix(profiles): wrap logging payload for Tauri frontend_log

* fix(profile-switch): add rollback and error propagation for failed persistence

- Added rollback on apply failure so Mihomo restores to the previous profile
  before exiting the success path early (state_machine.rs:474).
- Reworked persist_profiles_with_timeout to surface timeout/join/save errors,
  convert them into CmdResult failures, and trigger rollback + error propagation
  when persistence fails (state_machine.rs:703).

* fix(profile-switch): prevent mid-finalize reentrancy and lingering tasks

* fix(profile-switch): preserve pending queue and surface discarded switches

* fix(profile-switch): avoid draining Mihomo sockets on failed/cancelled switches

* fix(app-data-provider): restore backend-driven refresh and reattach fallbacks

* fix(profile-switch): queue concurrent updates and add bounded wait/backoff

* fix(proxy): trigger live refresh on app start for proxy snapshot

* refactor(profile-switch): split flow into layers and centralize async cleanup

- Introduced `SwitchDriver` to encapsulate queue and driver logic while keeping the public Tauri command API.
- Added workflow/cleanup helpers for notification dispatch and Mihomo connection draining, re-exported for API consistency.
- Replaced monolithic state machine with `core.rs`, `context.rs`, and `stages.rs`, plus a thin `mod.rs` re-export layer; stage methods are now individually testable.
- Removed legacy `workflow/state_machine.rs` and adjusted visibility on re-exported types/constants to ensure compilation.
2025-10-30 17:29:15 +08:00
renovate[bot]
af79bcd1cf chore(deps): update dependency react-i18next to v16.2.2 (#5251)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-30 17:12:20 +08:00
Tunglies
2d73afdff2 style: update UPDATELOG.md using details and summary 2025-10-30 17:00:00 +08:00
miewx
e7a9f8f755 add support for x-oss-meta-subscription-userinfo (#5234)
* add support for x-oss-meta-subscription-userinfo

* Update prfitem.rs

match any subscription-userinfo

* Update prfitem.rs

Changed to ends_with, which is better

* feat(config): enforce stricter header match for subscription usage

---------

Co-authored-by: i18n <i18n.site@gmail.com>
Co-authored-by: Slinetrac <realakayuki@gmail.com>
2025-10-30 11:24:40 +08:00
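
As an editorial aside on the commit above: a hedged sketch of the stricter header match it converges on, accepting `subscription-userinfo` plus dash-prefixed vendor variants such as `x-oss-meta-subscription-userinfo` rather than any name that merely ends with the suffix. The exact rule in prfitem.rs may differ:

```rust
// Illustrative header-name check, not the project's actual code.
fn is_subscription_userinfo_header(name: &str) -> bool {
    let name = name.to_ascii_lowercase();
    name == "subscription-userinfo"
        || name
            .strip_suffix("subscription-userinfo")
            // Require a `-` boundary: `x-oss-meta-subscription-userinfo`
            // matches, but `foosubscription-userinfo` does not.
            .is_some_and(|prefix| prefix.ends_with('-'))
}

fn main() {
    assert!(is_subscription_userinfo_header("subscription-userinfo"));
    assert!(is_subscription_userinfo_header("x-oss-meta-subscription-userinfo"));
    assert!(!is_subscription_userinfo_header("foosubscription-userinfo"));
}
```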
renovate[bot]
d209238009 chore(deps): update npm dependencies (#5245)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-30 10:24:35 +08:00
Sukka
dfcdb33e58 chore: use vite-swc-react (#5246) 2025-10-30 10:19:29 +08:00
Tunglies
37359ffc27 fix: add check for allow_auto_update in timer task filtering 2025-10-30 01:40:43 +08:00
oomeow
fb09e6c85d fix: notification can not notify frontend (#5243) 2025-10-30 00:34:45 +08:00
oomeow
d10665091b chore: update eslint ignorePattern 2025-10-29 21:16:18 +08:00
Tunglies
d8b0e9929c fix: include Mihomo-go122 by default for macOS 10.15+ to resolve Intel architecture compatibility issues 2025-10-29 21:14:02 +08:00
Tunglies
73323edf06 chore(deps): update clash_verge_service_ipc to version 2.0.20
Reduce memory usage, avoid duplicated clients
2025-10-29 20:35:45 +08:00
Tunglies
f4de4738f1 refactor(logger): replace ClashLogger with CLASH_LOGGER and update log handling; improve log retrieval and management 2025-10-29 17:58:02 +08:00
Tunglies
2e9f6dd174 fix: prevent service duplicate start_core and early-return after stop_core; fix start failures
Update clash_verge_service_ipc version to 2.0.18
2025-10-29 16:09:19 +08:00
renovate[bot]
e928089a77 chore(deps): update npm dependencies (#5231)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-29 08:29:47 +08:00
Tunglies
f41998284a fix(verge_patch): add tray_inline_proxy_groups handling to update flags and refresh tray 2025-10-29 02:41:07 +08:00
Tunglies
9375674c91 refactor(validate): simplify validation process management and remove unused code 2025-10-28 19:36:17 +08:00
Tunglies
2a7ccb5bde refactor(core): optimize RunningMode handling and improve state management 2025-10-28 19:16:42 +08:00
Tunglies
2af0af0837 refactor(tray): comment out enable_tray_icon references for future removal #5161
Since the network speed display in the tray menu has been removed
2025-10-28 14:37:57 +08:00
renovate[bot]
0fcf168b08 chore(deps): update dependency axios to ^1.13.0 (#5225)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-28 10:47:36 +08:00
Tunglies
f39436f1d0 refactor(i18n): optimize translation handling with Arc<str> for better memory efficiency
refactor(tray): change menu text storage to use Arc<str> for improved performance
refactor(service): utilize SmartString for error messages to enhance memory management
2025-10-28 00:26:20 +08:00
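
For context on the Arc<str> refactors above: shared immutable strings clone by bumping a refcount instead of copying bytes, and `Arc<str>` also drops `String`'s capacity word. A small illustrative sketch:

```rust
use std::sync::Arc;

fn main() {
    let label: Arc<str> = Arc::from("Proxy Groups");
    let for_tray = Arc::clone(&label); // O(1), no allocation
    assert_eq!(&*for_tray, "Proxy Groups");
    // On 64-bit targets: String is ptr + len + cap, Arc<str> is ptr + len.
    println!("String:   {} bytes", std::mem::size_of::<String>());
    println!("Arc<str>: {} bytes", std::mem::size_of::<Arc<str>>());
}
```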
❤是纱雾酱哟~
a9eb512f20 docs(autobuild): update download links for release assets (#5224)
- To match those in the actual "Assets" section

Signed-off-by: Dragon1573 <49941141+Dragon1573@users.noreply.github.com>
2025-10-28 00:00:01 +08:00
Tunglies
713162ca37 perf(i18n): change TRANSLATIONS type to use Box<Value> for better memory management
This reduces memory usage from 72 to 48 bytes
2025-10-27 23:17:40 +08:00
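
For context: boxing swaps an inline `serde_json::Value` (32 bytes on 64-bit targets) for a single 8-byte pointer, which is consistent with the 24-byte saving reported above. A sketch of the measurement, assuming the translations are serde_json values:

```rust
use serde_json::Value;

fn main() {
    // Shrinking the value type shrinks every entry of the map by the
    // same amount; 32 - 8 = 24 matches the 72 -> 48 reduction.
    println!("Value:      {} bytes", std::mem::size_of::<Value>());
    println!("Box<Value>: {} bytes", std::mem::size_of::<Box<Value>>());
}
```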
Tunglies
87168b6ce0 perf(tray): improve localization support in menu handling
refactor(tray): replace string literals with MenuIds for menu event handling
2025-10-27 23:08:05 +08:00