Commit Graph

182 Commits

Author SHA1 Message Date
Tunglies
550a7e0bb9 fix: proxy selection in Global mode not remembered after core restart or core change with sidecar #5466 2025-12-02 16:25:25 +08:00
Tunglies
a611f7d8a7 fix: switch reqwest client to use rustls-tls for improved security #5559 2025-12-01 22:22:26 +08:00
Tunglies
6897ead070 perf: change patch_config parameter from Mapping to &Mapping for efficiency 2025-11-30 20:44:21 +08:00
Tunglies
22e2e751a2 refactor(profile): update get_profiles to return SharedBox and optimize clone usage 2025-11-26 11:27:02 +08:00
Tunglies
ecc272aa20 feat: integrate tauri-plugin-clipboard-manager and add system info commands (#5593) 2025-11-25 16:58:25 +08:00
AetherWing
6b3f5eea16 fix: correct flag emoji for ISO alpha-3 region code (#5557)
* fix: correct flag emoji for ISO alpha-3 region code

* fix: use rust_iso3166 to convert ISO3 region code

* fix: validate ISO country code when generating flag emoji

* fix: correct icon encoding for unlock test in specific regions

---------

Co-authored-by: Tunglies <77394545+Tunglies@users.noreply.github.com>
2025-11-24 20:40:32 +08:00
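
A minimal sketch of the idea behind the fix above: the alpha-3 region code is first mapped to its alpha-2 equivalent (the commit uses the rust_iso3166 crate for that lookup), and the flag is then built from Unicode regional indicator symbols. The helper below covers only the second step; the names and fallback behaviour are illustrative, not the project's actual code.

```rust
/// Build a flag emoji from an ISO 3166-1 alpha-2 code ("US" -> 🇺🇸) by
/// mapping each ASCII letter onto the Unicode regional indicator block.
fn flag_emoji(alpha2: &str) -> Option<String> {
    if alpha2.len() != 2 || !alpha2.chars().all(|c| c.is_ascii_alphabetic()) {
        return None; // invalid country codes yield no flag
    }
    alpha2
        .to_ascii_uppercase()
        .chars()
        .map(|c| char::from_u32(0x1F1E6 + (c as u32 - 'A' as u32)))
        .collect()
}

fn main() {
    // "USA" would first be converted to "US" via an ISO 3166 lookup.
    assert_eq!(flag_emoji("US"), Some("🇺🇸".to_string()));
    assert_eq!(flag_emoji("XYZ"), None);
}
```
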
Tunglies
82bed4910e perf: refactor IRuntime to use HashSet for exists_keys and improve related functions performance 2025-11-22 16:25:50 +08:00
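
Moving exists_keys from a Vec to a HashSet turns membership checks into average O(1) lookups instead of linear scans. A tiny sketch of the pattern; the field and method names are illustrative, not the actual IRuntime definition.

```rust
use std::collections::HashSet;

// Illustrative stand-in for the runtime config's key tracking.
struct Runtime {
    exists_keys: HashSet<String>,
}

impl Runtime {
    fn has_key(&self, key: &str) -> bool {
        // Average O(1) lookup instead of Vec::contains' O(n) scan.
        self.exists_keys.contains(key)
    }
}
```
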
Tunglies
8e3273a32c refactor: simplify sysproxy usage in network commands 2025-11-20 19:50:25 +08:00
Tunglies
b3dc48d07e fix(tests): suppress clippy expect warnings in enhance function
fix(timer): improve task removal logic in add_task method
refactor(notification): drop binding after emitting event
chore(Cargo): add lints section to Cargo.toml
2025-11-20 14:59:49 +08:00
Sline
f439e93a2b feat(hotkey): add global reactivate_profiles shortcut (#5527)
* feat(hotkey): add global reactivate_profiles shortcut

* feat(profile): expose validation state for reactivation shortcuts
2025-11-19 17:06:23 +08:00
Slinetrac
94b07b51d6 chore: rename Youtube to YouTube
Closes #5526
2025-11-19 17:04:19 +08:00
Tunglies
8339fabb17 feat(sysinfo): add tauri-plugin-clash-verge-sysinfo for system information retrieval (#5510)
* feat(sysinfo): add tauri-plugin-clash-verge-sysinfo for system information retrieval

* feat(sysinfo): add tauri-plugin-clash-verge-sysinfo for system information retrieval

* fix(service): import Manager trait for app handle in linux_running_as_root function
2025-11-18 15:48:48 +08:00
Tunglies
056af768e5 feat: initialize workspace with clash-verge-draft and clash-verge-logging crates (#5489)
- Add Cargo.toml for workspace management, including dependencies and profiles.
- Create clash-verge-draft crate with basic structure, including a benchmark for Draft functionality.
- Implement Draft management with shared state and asynchronous modifications.
- Add tests for Draft functionality to ensure correctness.
- Create clash-verge-logging crate for logging utilities with structured log types and macros.
- Update src-tauri to use new crates and remove unnecessary configurations.
- Refactor existing code to utilize the new Draft and logging functionalities.
2025-11-17 11:51:50 +08:00
Tunglies
0866b93175 feat(logging): introduce clash-verge-logging crate for management (#5486)
* feat(logging): introduce clash-verge-logging crate for management

- Added a new crate `clash-verge-logging` with dependencies on `log`, `tokio`, `compact_str`, and `flexi_logger`.
- Implemented logging types and macros for structured logging across the application.
- Replaced existing logging imports with the new `clash_verge_logging` crate in various modules.
- Updated logging functionality to support different logging types and error handling.
- Refactored code to improve logging consistency and maintainability.

* fix(logging): update import paths for clash_verge_logging in linux.rs and dns.rs

* fix(logging): update import statement for clash_verge_logging in windows.rs
2025-11-17 10:42:57 +08:00
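
A rough sketch of the kind of typed logging macro the clash-verge-logging crate introduces: each call carries a logging type that prefixes the message before it reaches the log facade. The enum variants echo ones named in later commits, but the macro body and names here are assumptions, not the crate's real API.

```rust
use std::fmt;

/// Illustrative categories, loosely mirroring the ones named in the commits.
#[derive(Debug, Clone, Copy)]
pub enum LogType {
    Core,
    Network,
    Window,
    Service,
}

impl fmt::Display for LogType {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{self:?}")
    }
}

/// Prefix every message with its category, then delegate to the `log` facade.
#[macro_export]
macro_rules! logging {
    ($level:ident, $ty:expr, $($arg:tt)+) => {
        log::log!(log::Level::$level, "[{}] {}", $ty, format_args!($($arg)+))
    };
}

fn main() {
    // With any `log` backend installed this prints, e.g.:
    // [Network] proxy port changed to 7890
    logging!(Info, LogType::Network, "proxy port changed to {}", 7890);
}
```
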
Tunglies
dbb4877be6 refactor(Draft): extract Draft management into a crate (#5470)
* feat: implement draft functionality with apply and discard methods, and add benchmarks and tests

* Refactor Draft management and integrate Tokio for asynchronous operations

- Introduced a new `IVerge` struct for configuration management.
- Updated `Draft` struct to use `Arc<RwLock>` for better concurrency handling.
- Added asynchronous editing capabilities to `Draft` using Tokio.
- Replaced synchronous editing methods with asynchronous counterparts.
- Updated benchmark tests to reflect changes in the `Draft` API.
- Removed redundant draft utility module and integrated its functionality into the main `Draft` implementation.
- Adjusted tests to validate new behavior and ensure correctness of the `Draft` management flow.
2025-11-16 00:33:21 +08:00
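
A condensed sketch of the Draft pattern described above: shared state behind Arc<tokio::sync::RwLock<..>>, a lazily created draft copy that is edited asynchronously, then applied or discarded. Method names follow the commit wording; the bodies are simplified assumptions, not the crate's implementation.

```rust
use std::sync::Arc;
use tokio::sync::RwLock;

/// Committed value plus an optional in-progress draft.
#[derive(Clone)]
pub struct Draft<T: Clone> {
    inner: Arc<RwLock<(T, Option<T>)>>,
}

impl<T: Clone> Draft<T> {
    pub fn new(value: T) -> Self {
        Self { inner: Arc::new(RwLock::new((value, None))) }
    }

    /// Edit the draft copy, creating it from the committed value on first use.
    pub async fn edit<F: FnOnce(&mut T)>(&self, f: F) {
        let mut guard = self.inner.write().await;
        let (committed, draft) = &mut *guard;
        f(draft.get_or_insert_with(|| committed.clone()));
    }

    /// Promote the draft to the committed value.
    pub async fn apply(&self) {
        let mut guard = self.inner.write().await;
        if let Some(draft) = guard.1.take() {
            guard.0 = draft;
        }
    }

    /// Throw the draft away, keeping the committed value.
    pub async fn discard(&self) {
        self.inner.write().await.1 = None;
    }

    pub async fn latest(&self) -> T {
        let guard = self.inner.read().await;
        guard.1.clone().unwrap_or_else(|| guard.0.clone())
    }
}

#[tokio::main]
async fn main() {
    let cfg = Draft::new(vec!["default".to_string()]);
    cfg.edit(|v| v.push("experimental".into())).await;
    cfg.apply().await;
    assert_eq!(cfg.latest().await.len(), 2);
}
```
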
Tunglies
8654bad6b0 refactor: migrate proxy setting guard to sysproxy-rs crate (#5287)
* Refactor proxy management: Remove EventDrivenProxyManager and async_proxy_query

- Removed the EventDrivenProxyManager and its related event-driven proxy management logic.
- Replaced usages of AsyncProxyQuery with direct calls to sysproxy for fetching system and auto proxy configurations.
- Cleaned up constants by removing unused default proxy host.
- Updated sysopt to eliminate calls to the removed EventDrivenProxyManager.
- Adjusted logging and proxy state management to reflect the removal of event-driven architecture.
- Simplified the core module structure by removing unnecessary module imports.

* refactor: remove bypass module for cleaner network configuration

* feat: update sysproxy to version 0.4.0 and add guard feature; refactor sysopt for improved proxy management

* feat(windows/sysproxy): unify guard-type handling and auto-restore drift

* refactor(config): remove commented-out code for SysProxy update flag

---------

Co-authored-by: Slinetrac <realakayuki@gmail.com>
2025-11-15 20:51:36 +08:00
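
The sysproxy-rs migration above hands reading and restoring the OS proxy to the crate, with a guard that auto-restores drift. A rough, hedged sketch of such a guard loop; the Sysproxy::get_system_proxy / set_system_proxy calls and field names reflect how the sysproxy crate is commonly documented, but the exact API (and the 0.4.0 guard feature) should be treated as assumptions here.

```rust
use std::time::Duration;
use sysproxy::Sysproxy; // sysproxy-rs; exact API treated as an assumption

/// Re-asserts the desired proxy settings if something else changed them.
/// The guard feature mentioned in the commit is more complete than this.
async fn guard_system_proxy(desired: Sysproxy) {
    let mut ticker = tokio::time::interval(Duration::from_secs(10));
    loop {
        ticker.tick().await;
        let drifted = match Sysproxy::get_system_proxy() {
            // Field names (enable/host/port) follow the crate's docs.
            Ok(cur) => cur.enable != desired.enable
                || cur.host != desired.host
                || cur.port != desired.port,
            Err(_) => true, // could not read the current state: re-apply
        };
        if drifted {
            if let Err(e) = desired.set_system_proxy() {
                eprintln!("failed to restore system proxy: {e}");
            }
        }
    }
}
```
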
Tunglies
a6cb903fe6 refactor: simplify function return types and remove unnecessary wraps 2025-11-15 08:56:58 +08:00
Tunglies
2b0ba4dc95 refactor: streamline config handling and logging mechanisms 2025-11-15 02:46:34 +08:00
Sline
838e401796 feat(auto-backup): implement centralized auto-backup manager and UI (#5374)
* feat(auto-backup): implement centralized auto-backup manager and UI

- Introduced AutoBackupManager to handle verge settings, run a background scheduler, debounce change-driven backups, and trim auto-labeled archives (keeps 20); wired into startup and config refresh hooks
  (src-tauri/src/module/auto_backup.rs:28-209, src-tauri/src/utils/resolve/mod.rs:64-136, src-tauri/src/feat/config.rs:102-238)

- Extended verge schema and backup helpers so scheduled/change-based settings persist, create_local_backup can rename archives, and profile/global-extend mutations now trigger backups
  (src-tauri/src/config/verge.rs:162-536, src/types/types.d.ts:857-859, src-tauri/src/feat/backup.rs:125-189, src-tauri/src/cmd/profile.rs:66-476, src-tauri/src/cmd/save_profile.rs:21-82)

- Added Auto Backup settings panel in backup dialog with dual toggles + interval selector; localized new strings across all locales
  (src/components/setting/mods/auto-backup-settings.tsx:1-138, src/components/setting/mods/backup-viewer.tsx:28-309, src/locales/en/settings.json:312-326 and mirrored entries)

- Regenerated typed i18n resources for strong typing in React
  (src/types/generated/i18n-keys.ts, src/types/generated/i18n-resources.ts)

* refactor(setting/backup): restructure backup dialog for consistent layout

* refactor(ui): unify settings dialog style

* fix(backup): only trigger auto-backup on valid saves & restore restarts app safely

* fix(backup): scrub console.log leak and rewire WebDAV dialog to actually probe server

* refactor: rename SubscriptionChange to ProfileChange

* chore: update i18n

* chore: WebDAV i18n improvements

* refactor(backup): error handling

* refactor(auto-backup): wrap scheduler startup with maybe_start_runner

* refactor: remove the redundant throw in handleExport

* feat(backup-history-viewer): improve WebDAV handling and UI fallback

* feat(auto-backup): trigger backups on all profile edits & improve interval input UX

* refactor: use InputAdornment

* docs: Changelog.md
2025-11-10 13:49:14 +08:00
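
A minimal sketch of the debounced, change-driven backup idea from the commit above, using a Tokio channel as the change signal: a burst of profile edits collapses into a single archive once the stream goes quiet. The real AutoBackupManager adds scheduling, settings, and retention trimming; all names here are illustrative.

```rust
use std::time::Duration;
use tokio::sync::mpsc;
use tokio::time::sleep;

/// Waits for change notifications and runs one backup per quiet period.
async fn debounced_backup_loop(mut changes: mpsc::Receiver<()>) {
    const QUIET: Duration = Duration::from_secs(30);
    while changes.recv().await.is_some() {
        // Absorb further changes until the stream has been quiet for QUIET.
        loop {
            tokio::select! {
                more = changes.recv() => {
                    if more.is_none() { break; }
                }
                _ = sleep(QUIET) => break,
            }
        }
        run_backup().await;
    }
}

async fn run_backup() {
    // Stand-in for create_local_backup plus trimming old auto-labeled archives.
    println!("auto backup triggered");
}
```
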
Slinetrac
0dcdd7fed6 fix: clippy lint 2025-11-09 23:23:03 +08:00
Tunglies
4eeb883464 refactor: update imports to use as _ for unused identifiers across multiple files 2025-11-09 22:15:37 +08:00
Tunglies
7f267fa727 fix: update configuration data access from latest_arc to data_arc in Clash and WebDAV commands and init windows 2025-11-09 21:26:11 +08:00
Tunglies
9c5cda793d refactor: streamline UI readiness handling by replacing RwLock with AtomicU8 and updating related functions 2025-11-09 02:47:48 +08:00
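
Replacing the RwLock with an AtomicU8 turns the readiness flag into a lock-free state byte. A small sketch of that pattern; the stage values are illustrative, not the project's actual encoding.

```rust
use std::sync::atomic::{AtomicU8, Ordering};

// 0 = not ready, 1 = window created, 2 = frontend hydrated (illustrative).
static UI_READY: AtomicU8 = AtomicU8::new(0);

fn mark_ui_stage(stage: u8) {
    // A plain atomic store is enough; no guard or await point is needed.
    UI_READY.store(stage, Ordering::SeqCst);
}

fn ui_is_ready() -> bool {
    UI_READY.load(Ordering::SeqCst) >= 2
}
```
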
Sline
c8aa72186e chore: i18n (#5276)
* chore: notice i18n

* feat: add script to clean up unused i18n keys

* chore: cleanup i18n keys

* refactor(i18n/proxies): migrate proxies UI to structured locale keys

* chore: i18n for rule module

* chore: i18n for profile module

* chore: i18n for connections module

* chore: i18n for settings module

* chore: i18n for verge settings

* chore: i18n for theme settings

* chore: i18n for theme

* chore(i18n): components.home.*

* chore(i18n): remove unused i18n keys

* chore(i18n): components.profile.*

* chore(i18n): components.connection

* chore(i18n): pages.logs.*

* chore(i18n): pages.*.provider

* chore(i18n): components.settings.externalCors.*

* chore(i18n): components.settings.clash.*

* chore(i18n): components.settings.liteMode.*

* chore(i18n): components.settings.backup.*

* chore(i18n): components.settings.clash.port.*

* chore(i18n): components.settings.misc.*

* chore(i18n): components.settings.update.*

* chore(i18n): components.settings.sysproxy.*

* chore(i18n): components.settings.sysproxy.*

* chore(i18n): pages.profiles.notices/components.providers.notices

* refactor(notice): unify showNotice usage

* refactor(notice): add typed showNotice shortcuts, centralize defaults, and simplify subscriptions

* refactor: unify showNotice usage

* refactor(notice): unify showNotice API

* refactor(notice): unify showNotice usage

* chore(i18n): components.test.*

* chore(i18n): components.settings.dns.*

* chore(i18n): components.home.clashInfo.*

* chore(i18n): components.home.systemInfo.*

* chore(i18n): components.home.ipInfo/traffic.*

* chore(i18n): navigation.*

* refactor(i18n): remove pages.* namespace and migrate route texts under module-level page keys

* chore(i18n): common.*

* chore(i18n): common.*

* fix: change error handling in patch_profiles_config to return false when a switch is in progress

* fix: improve error handling in patch_profiles_config to prevent requests during profile switching

* fix: change error handling in patch_profiles_config to return false when a switch is in progress

fix: ensure CURRENT_SWITCHING_PROFILE is reset after config updates in perform_config_update and patch_profiles_config

* chore(i18n): restructure root-level locale keys into namespaces

* chore(i18n): add missing i18n keys

* docs: i18n guide

* chore: adjust i18n

* refactor(i18n): align UI actions and status labels with common keys

* refactor(i18n): unify two-name locale namespaces

* refactor(i18n/components): unify locale keys and update component references

* chore(i18n): add shared and entities namespaces to all locale files

* refactor(i18n): consolidate shared and entity namespaces across features

* chore(deps): update npm dependencies to ^7.3.5 (#5310)

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>

* refactor(i18n): migrate shared editor modes and consolidate entities namespaces

* tmp

* refactor(i18n): flatten locales and move theme/validation strings

* docs: CONTRIBUTING_i18n.md

* refactor(i18n): restructure feedback and profile namespaces for better organization

* refactor(i18n): unify settings locale structure and update references

* refactor(i18n): reorganize locale keys for home, proxies, rules, connections, logs, unlock, and tests

* refactor(i18n/feedback/layout): unify shared toasts & normalize layout namespace

* refactor(i18n): centralize common UI strings in shared

* refactor(i18n): flatten headers and unify locale schema

* refactor(i18n): consolidate duplicate per-feature translations into shared namespace

* refactor(i18n): split locales into per-namespace files

* style: lint

* refactor(i18n): unify unlock UI translations under tests namespace

* feat(i18n): add type-checked translation keys

* style: eslint import order

* feat(i18n): replace ad-hoc loader with rust-i18n backend bundles

* chore(prebuild): remove locale-copy step

* fix(i18n, notice): propagate runtime params and update cleanup script path

* fix(i18n,notice): make locale formatting idempotent and guard early notice translations

* fix(i18n): resolve locale aliases and match OS codes correctly

* fix(unlock): use i18next-compatible double-brace interpolation in failure notice

* fix(i18n): route unlock error notices through translation keys

* fix(i18n): i18n types

* feat(i18n): localize upgrade notice for Clash core viewer

* fix(notice): ensure runtime overrides apply to prefix translations

* chore(i18n): replace literal notices with translation keys

* chore(i18n): types

* chore(i18n): regen typings before formatting to keep keys in sync

* chore(i18n): simplify labels

* chore(i18n): adjust translation

* chore: remove eslint-plugin-i18next

* chore(i18n): add/refine Korean translations across frontend scopes and Rust backend (#5341)

* chore(i18n): translate settings.json (missed in previous pass) (#5343)

* chore(i18n): add/refine Korean translations across frontend scopes and Rust backend

* chore(i18n): add/refine Korean translations across frontend scopes and Rust backend

* fix(i18n-tauri): quote placeholder-leading value in ko.yml to prevent rust_i18n parse panic

* chore(i18n): translate settings.json (forgot to include previously)

---------

Co-authored-by: rozan <34974262+thelojan@users.noreply.github.com>
2025-11-08 19:40:38 +08:00
Tunglies
538cba5a33 fix: return correct type in get_profiles function 2025-11-08 17:07:48 +08:00
Tunglies
409b16b49f refactor: streamline admin check logic and improve get_running_mode return type (#5325) 2025-11-06 13:37:11 +08:00
Tunglies
9a1465ec4d refactor: replace AtomicI64 with Instant for app start time tracking and simplify uptime calculation 2025-11-06 12:59:05 +08:00
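
Tracking the start time as an Instant makes uptime a single subtraction instead of juggling epoch milliseconds in an AtomicI64. A small sketch, assuming a OnceLock-style global rather than whatever the project actually uses.

```rust
use std::sync::OnceLock;
use std::time::{Duration, Instant};

static APP_START: OnceLock<Instant> = OnceLock::new();

fn record_app_start() {
    // Called once during startup; later calls are ignored.
    let _ = APP_START.set(Instant::now());
}

fn uptime() -> Duration {
    APP_START
        .get()
        .map(|start| start.elapsed())
        .unwrap_or_default()
}
```
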
Tunglies
363fa98891 refactor: enhance YouTube Premium check logic and streamline response handling 2025-11-06 11:03:35 +08:00
Tunglies
5a8e83cd49 refactor: change function definitions to const for improved performance and clarity 2025-11-06 10:47:25 +08:00
Tunglies
651513c826 refactor: optimize error message handling and improve cloning in various functions 2025-11-06 10:26:40 +08:00
Tunglies
3e2f605e77 fix: improve error handling and logging in various modules 2025-11-05 02:11:43 +08:00
Tunglies
ab136e463f fix: change error handling in patch_profiles_config to return false when a switch is in progress
fix: improve error handling in patch_profiles_config to prevent requests during profile switching

fix: change error handling in patch_profiles_config to return false when a switch is in progress

fix: ensure CURRENT_SWITCHING_PROFILE is reset after config updates in perform_config_update and patch_profiles_config
2025-11-04 23:05:01 +08:00
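
A minimal sketch of the switching guard described above: an atomic flag is claimed before the switch, concurrent requests get a plain false back, and the flag is reset on every path. The names mirror the commit; the surrounding logic is an assumption.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

static CURRENT_SWITCHING_PROFILE: AtomicBool = AtomicBool::new(false);

/// Returns false instead of erroring when another switch is in flight.
async fn patch_profiles_config_guarded() -> bool {
    if CURRENT_SWITCHING_PROFILE
        .compare_exchange(false, true, Ordering::SeqCst, Ordering::SeqCst)
        .is_err()
    {
        return false; // a switch is already in progress
    }

    let result = perform_config_update().await;

    // Reset the flag on every path so a failure cannot wedge future switches.
    CURRENT_SWITCHING_PROFILE.store(false, Ordering::SeqCst);
    result
}

async fn perform_config_update() -> bool {
    // Stand-in for the real config patch and core reload.
    true
}
```
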
Tunglies
bae584b1ab perf: optimize profile handle memory usage 2025-11-04 10:05:17 +08:00
Tunglies
b86ceb26f6 fix: streamline verge configuration fetching and patching functions 2025-11-04 08:01:33 +08:00
Tunglies
b70d45b66a fix: update profile handling to apply and discard changes correctly 2025-11-04 07:58:14 +08:00
Tunglies
2287ea5f0b Refactor configuration access to use latest_arc() instead of latest_ref()
- Updated multiple instances in the codebase to replace calls to latest_ref() with latest_arc() for improved performance and memory management.
- This change affects various modules including validate, enhance, feat (backup, clash, config, profile, proxy, window), utils (draft, i18n, init, network, resolve, server).
- Ensured that all references to configuration data are now using the new arc-based approach to enhance concurrency and reduce cloning overhead.

refactor: update imports to explicitly include ClashInfo and Config in command files
2025-11-04 06:06:20 +08:00
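
The latest_arc() change hands callers a cheap Arc clone of the current config snapshot instead of a read guard that must be held across await points. A simplified sketch of the difference; the real Draft/Config types are more elaborate.

```rust
use std::sync::{Arc, RwLock, RwLockReadGuard};

struct ConfigStore<T> {
    current: RwLock<Arc<T>>,
}

impl<T> ConfigStore<T> {
    /// Arc-based access: the lock is released immediately and the caller can
    /// keep the snapshot across .await points without blocking writers.
    fn latest_arc(&self) -> Arc<T> {
        let guard = self.current.read().expect("config lock poisoned");
        Arc::clone(&guard)
    }

    /// Guard-based access (the older latest_ref() style) keeps the read lock
    /// alive for as long as the returned guard is held.
    fn latest_ref(&self) -> RwLockReadGuard<'_, Arc<T>> {
        self.current.read().expect("config lock poisoned")
    }
}
```
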
Tunglies
501def6695 refactor: update patch_config method to accept a reference to IProfiles 2025-11-03 08:46:37 +08:00
Tunglies
dce349586c refactor: simplify profile retrieval and remove unused template method 2025-11-03 03:17:33 +08:00
oomeow
d4cb16f4ff perf: select proxy (#5284)
* perf: improve select proxy for group

* chore: update
2025-11-02 22:33:50 +08:00
Tunglies
4a7859bdae refactor: replace hardcoded DNS config filename with constant reference (#5280)
* refactor: replace hardcoded DNS config filename with constant reference

* refactor: remove redundant import of constants in IClashTemp template method

* refactor: add conditional compilation for DEFAULT_REDIR based on OS

* refactor: simplify default TPROXY port handling and remove unused trace_err macro

* refactor: simplify default TPROXY port fallback logic
2025-11-01 22:50:19 +08:00
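
A tiny sketch of OS-gated defaults like the DEFAULT_REDIR change above; the constant name is kept from the commit but the port values are placeholders, not the project's real defaults.

```rust
// Redirect-based routing only applies on some platforms, so the default
// is compiled in conditionally.
#[cfg(any(target_os = "linux", target_os = "macos"))]
pub const DEFAULT_REDIR_PORT: u16 = 7895; // placeholder value

#[cfg(not(any(target_os = "linux", target_os = "macos")))]
pub const DEFAULT_REDIR_PORT: u16 = 0; // redir not used on other platforms
```
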
Tunglies
fb260fb33d Refactor logging to use a centralized logging utility across the application (#5277)
- Replaced direct log calls with a new logging macro that includes a logging type for better categorization.
- Updated logging in various modules including `merge.rs`, `mod.rs`, `tun.rs`, `clash.rs`, `profile.rs`, `proxy.rs`, `window.rs`, `lightweight.rs`, `guard.rs`, `autostart.rs`, `dirs.rs`, `dns.rs`, `scheme.rs`, `server.rs`, and `window_manager.rs`.
- Introduced logging types such as `Core`, `Network`, `ProxyMode`, `Window`, `Lightweight`, `Service`, and `File` to enhance log clarity and filtering.
2025-11-01 20:47:01 +08:00
Tunglies
9370a56337 refactor: reduce clone operation (#5268)
* refactor: optimize item handling and improve profile management

* refactor: update IVerge references to use references instead of owned values

* refactor: update patch_verge to use data_ref for improved data handling

* refactor: move handle_copy function to improve resource initialization logic

* refactor: update profile handling to use references for improved memory efficiency

* refactor: simplify get_item method and update profile item retrieval to use string slices

* refactor: update profile validation and patching to use references for improved performance

* refactor: update profile functions to use references for improved performance and memory efficiency

* refactor: update profile patching functions to use references for improved memory efficiency

* refactor: simplify merge function in PrfOption to enhance readability

* refactor: update change_core function to accept a reference for improved memory efficiency

* refactor: update PrfItem and profile functions to use references for improved memory efficiency

* refactor: update resolve_scheme function to accept a reference for improved memory efficiency

* refactor: update resolve_scheme function to accept a string slice for improved flexibility

* refactor: simplify update_profile parameters and logic
2025-11-01 20:03:56 +08:00
Slinetrac
9dc50da167 fix: profile auto refresh #5274 2025-11-01 19:24:54 +08:00
Tunglies
b3b8eeb577 refactor: convert file operations to async using tokio fs (#5267)
* refactor: convert file operations to async using tokio fs

* refactor: integrate AsyncHandler for file operations in backup processes
2025-11-01 19:24:52 +08:00
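
Converting file operations to tokio::fs keeps runtime worker threads free while profiles are read or written. A minimal sketch of the async style the commit moves toward:

```rust
use tokio::fs;

/// Async read + write: neither call blocks a runtime worker thread the way
/// std::fs::read_to_string / std::fs::write would inside an async fn.
async fn copy_profile(src: &str, dst: &str) -> std::io::Result<()> {
    let contents = fs::read_to_string(src).await?;
    fs::write(dst, contents).await?;
    Ok(())
}
```
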
Tunglies
a869dbb441 Revert "refactor: profile switch (#5197)"
This reverts commit c2dcd86722.
2025-10-30 18:11:04 +08:00
Tunglies
c27ad3fdcb feat: add log opening functionality in tray menu and update localization 2025-10-30 17:34:41 +08:00
Sline
c2dcd86722 refactor: profile switch (#5197)
* refactor: proxy refresh

* fix(proxy-store): properly hydrate and filter backend provider snapshots

* fix(proxy-store): add monotonic fetch guard and event bridge cleanup

* fix(proxy-store): tweak fetch sequencing guard to prevent snapshot invalidation from wiping fast responses

* docs: UPDATELOG.md

* fix(proxy-snapshot, proxy-groups): restore last-selected proxy and group info

* fix(proxy): merge static and provider entries in snapshot; fix Virtuoso viewport height

* fix(proxy-groups): restrict reduced-height viewport to chain-mode column

* refactor(profiles): introduce a state machine

* refactor:replace state machine with reducer

* refactor:introduce a profile switch worker

* refactor: hooked up a backend-driven profile switch flow

* refactor(profile-switch): serialize switches with async queue and enrich frontend events

* feat(profiles): centralize profile switching with reducer/driver queue to fix stuck UI on rapid toggles

* chore: translate comments and log messages to English to avoid encoding issues

* refactor: migrate backend queue to SwitchDriver actor

* fix(profile): unify error string types in validation helper

* refactor(profile): make switch driver fully async and handle panics safely

* refactor(cmd): move switch-validation helper into new profile_switch module

* refactor(profile): modularize switch logic into profile_switch.rs

* refactor(profile_switch): modularize switch handler

- Break monolithic switch handler into proper module hierarchy
- Move shared globals, constants, and SwitchScope guard to state.rs
- Isolate queue orchestration and async task spawning in driver.rs
- Consolidate switch pipeline and config patching in workflow.rs
- Extract request pre-checks/YAML validation into validation.rs

* refactor(profile_switch): centralize state management and add cancellation flow

- Introduced SwitchManager in state.rs to unify mutex, sequencing, and SwitchScope handling.
- Added SwitchCancellation and SwitchRequest wrappers to encapsulate cancel tokens and notifications.
- Updated driver to allocate task IDs via SwitchManager, cancel old tokens, and queue next jobs in order.
- Updated workflow to check cancellation and sequence at each phase, replacing global flags with manager APIs.

* feat(profile_switch): integrate explicit state machine for profile switching

- workflow.rs:24 now delegates each switch to SwitchStateMachine, passing an owned SwitchRequest.
  Queue cancellation and state-sequence checks are centralized inside the machine instead of scattered guards.
- workflow.rs:176 replaces the old helper with `SwitchStateMachine::new(manager(), None, profiles).run().await`,
  ensuring manual profile patches follow the same workflow (locking, validation, rollback) as queued switches.
- workflow.rs:180 & 275 expose `validate_profile_yaml` and `restore_previous_profile` for reuse inside the state machine.

- workflow/state_machine.rs:1 introduces a dedicated state machine module.
  It manages global mutex acquisition, request/cancellation state, YAML validation, draft patching,
  `CoreManager::update_config`, failure rollback, and tray/notification side-effects.
  Transitions check for cancellations and stale sequences; completions release guards via `SwitchScope` drop.

* refactor(profile-switch): integrate stage-aware panic handling

- src-tauri/src/cmd/profile_switch/workflow/state_machine.rs:1
  Defines SwitchStage and SwitchPanicInfo as crate-visible, wraps each transition in with_stage(...) with catch_unwind, and propagates CmdResult<bool> to distinguish validation failures from panics while keeping cancellation semantics.

- src-tauri/src/cmd/profile_switch/workflow.rs:25
  Updates run_switch_job to return Result<bool, SwitchPanicInfo>, routing timeout, validation, config, and stage panic cases separately. Reuses SwitchPanicInfo for logging/UI notifications; patch_profiles_config maps state-machine panics into user-facing error strings.

- src-tauri/src/cmd/profile_switch/driver.rs:1
  Adds SwitchJobOutcome to unify workflow results: normal completions carry bool, and panics propagate SwitchPanicInfo. The driver loop now logs panics explicitly and uses AssertUnwindSafe(...).catch_unwind() to guard setup-phase panics.

* refactor(profile-switch): add watchdog, heartbeat, and async timeout guards

- Introduce SwitchHeartbeat for stage tracking and timing; log stage transitions with elapsed durations.
- Add watchdog in driver to cancel stalled switches (5s heartbeat timeout).
- Wrap blocking ops (Config::apply, tray updates, profiles_save_file_safe, etc.) with time::timeout to prevent async stalls.
- Improve logs for stage transitions and watchdog timeouts to clarify cancellation points.

* refactor(profile-switch): async post-switch tasks, early lock release, and spawn_blocking for IO

* feat(profile-switch): track cleanup and coordinate pipeline

- Add explicit cleanup tracking in the driver (`cleanup_profiles` map + `CleanupDone` messages) to know when background post-switch work is still running before starting a new workflow. (driver.rs:29-50)
- Update `handle_enqueue` to detect “cleanup in progress”: same-profile retries are short-circuited; other requests collapse the pending queue, cancelling old tokens so only the latest intent survives. (driver.rs:176-247)
- Rework scheduling helpers: `start_next_job` refuses to start while cleanup is outstanding; discarded requests release cancellation tokens; cleanup completion explicitly restarts the pipeline. (driver.rs:258-442)

* feat(profile-switch): unify post-switch cleanup handling

- workflow.rs (25-427) returns `SwitchWorkflowResult` (success + CleanupHandle) or `SwitchWorkflowError`.
  All failure/timeout paths stash post-switch work into a single CleanupHandle.
  Cleanup helpers (`notify_profile_switch_finished` and `close_connections_after_switch`) run inside that task for proper lifetime handling.

- driver.rs (29-439) propagates CleanupHandle through `SwitchJobOutcome`, spawns a bridge to wait for completion, and blocks `start_next_job` until done.
  Direct driver-side panics now schedule failure cleanup via the shared helper.

* tmp

* Revert "tmp"

This reverts commit e582cf4a65.

* refactor: queue frontend events through async dispatcher

* refactor: queue frontend switch/proxy events and throttle notices

* chore: frontend debug log

* fix: re-enable only ProfileSwitchFinished events - keep others suppressed for crash isolation

- Re-enabled only ProfileSwitchFinished events; RefreshClash, RefreshProxy, and ProfileChanged remain suppressed (they log suppression messages)
- Allows frontend to receive task completion notifications for UI feedback while crash isolation continues
- src-tauri/src/core/handle.rs now only suppresses notify_profile_changed
- Serialized emitter, frontend logging bridge, and other diagnostics unchanged

* refactor: refreshClashData

* refactor(proxy): stabilize proxy switch pipeline and rendering

- Add coalescing buffer in notification.rs to emit only the latest proxies-updated snapshot
- Replace nextTick with queueMicrotask in asyncQueue.ts for same-frame hydration
- Hide auto-generated GLOBAL snapshot and preserve optional metadata in proxy-snapshot.ts
- Introduce stable proxy rendering state in AppDataProvider (proxyTargetProfileId, proxyDisplayProfileId, isProxyRefreshPending)
- Update proxy page to fade content during refresh and overlay status banner instead of showing incomplete snapshot

* refactor(profiles): move manual activating logic to reducer for deterministic queue tracking

* refactor: replace proxy-data event bridge with pure polling and simplify proxy store

- Replaced the proxy-data event bridge with pure polling: AppDataProvider now fetches the initial snapshot and drives refreshes from the polled switchStatus, removing verge://refresh-* listeners (src/providers/app-data-provider.tsx).
- Simplified proxy-store by dropping the proxies-updated listener queue and unused payload/normalizer helpers; relies on SWR/provider fetch path + calcuProxies for live updates (src/stores/proxy-store.ts).
- Trimmed layout-level event wiring to keep only notice/show/hide subscriptions, removing obsolete refresh listeners (src/pages/_layout/useLayoutEvents.ts).

* refactor(proxy): streamline proxies-updated handling and store event flow

- AppDataProvider now treats `proxies-updated` as the fast path: the listener
  calls `applyLiveProxyPayload` immediately and schedules only a single fallback
  `fetchLiveProxies` ~600 ms later (replacing the old 0/250/1000/2000 cascade).
  Expensive provider/rule refreshes run in parallel via `Promise.allSettled`, and
  the multi-stage queue on profile updates completion was removed
  (src/providers/app-data-provider.tsx).

- Rebuilt proxy-store to support the event flow: restored `setLive`, provider
  normalization, and an animation-frame + async queue that applies payloads without
  blocking. Exposed `applyLiveProxyPayload` so providers can push events directly
  into the store (src/stores/proxy-store.ts).

* refactor: switch delay

* refactor(app-data-provider): trigger getProfileSwitchStatus revalidation on profile-switch-finished

- AppDataProvider now listens to `profile-switch-finished` and calls `mutate("getProfileSwitchStatus")` to immediately update state and unlock buttons (src/providers/app-data-provider.tsx).
- Retain existing detailed timing logs for monitoring other stages.
- Frontend success notifications remain instant; background refreshes continue asynchronously.

* fix(profiles): prevent duplicate toast on page remount

* refactor(profile-switch): make active switches preemptible and prevent queue piling

- Add notify mechanism to SwitchCancellation to await cancellation without busy-waiting (state.rs:82)
- Collapse pending queue to a single entry in the driver; cancel in-flight task on newer request (driver.rs:232)
- Update handle_update_core to watch cancel token and 30s timeout; release locks, discard draft, and exit early if canceled (state_machine.rs:301)
- Providers revalidate status immediately on profile-switch-finished events (app-data-provider.tsx:208)

* refactor(core): make core reload phase controllable, reduce 0xcfffffff risk

- CoreManager::apply_config now calls `reload_config_with_retry`, each attempt waits up to 5s, retries 3 times; on failure, returns error with duration logged and triggers core restart if needed (src-tauri/src/core/manager/config.rs:175, 205)
- `reload_config_with_retry` logs attempt info on timeout or error; if error is a Mihomo connection issue, fallback to original restart logic (src-tauri/src/core/manager/config.rs:211)
- `reload_config_once` retains original Mihomo call for retry wrapper usage (src-tauri/src/core/manager/config.rs:247)

* chore(frontend-logs): downgrade routine event logs from info to debug

- Logs like `emit_via_app entering spawn_blocking`, `Async emit…`, `Buffered proxies…` are now debug-level (src-tauri/src/core/notification.rs:155, :265, :309…)
- Genuine warnings/errors (failures/timeouts) remain at warn/error
- Core stage logs remain info to keep backend tracking visible

* refactor(frontend-emit): make emit_via_app fire-and-forget async

- `emit_via_app` now a regular function; spawns with `tokio::spawn` and logs a warn if `emit_to` fails, caller returns immediately (src-tauri/src/core/notification.rs:269)
- Removed `.await` at Async emit and flush_proxies calls; only record dispatch duration and warn on failure (src-tauri/src/core/notification.rs:211, :329)

* refactor(ui): restructure profile switch for event-driven speed + polling stability

- Backend
  - SwitchManager maintains a lightweight event queue: added `event_sequence`, `recent_events`, and `SwitchResultEvent`; provides `push_event` / `events_after` (state.rs)
  - `handle_completion` pushes events on success/failure and keeps `last_result` (driver.rs) for frontend incremental fetch
  - New Tauri command `get_profile_switch_events(after_sequence)` exposes `events_after` (profile_switch/mod.rs → profile.rs → lib.rs)
- Notification system
  - `NotificationSystem::process_event` only logs debug, disables WebView `emit_to`, fixes 0xcfffffff
  - Related emit/buffer functions now safe no-op, removed unused structures and warnings (notification.rs)
- Frontend
  - services/cmds.ts defines `SwitchResultEvent` and `getProfileSwitchEvents`
  - `AppDataProvider` holds `switchEventSeqRef`, polls incremental events every 0.25s (busy) / 1s (idle); each event triggers:
      - immediate `globalMutate("getProfiles")` to refresh current profile
      - background refresh of proxies/providers/rules via `Promise.allSettled` (failures logged, non-blocking)
      - forced `mutateSwitchStatus` to correct state
  - original switchStatus effect calls `handleSwitchResult` as fallback; other toast/activation logic handled in profiles.tsx
- Commands / API cleanup
  - removed `pub use profile_switch::*;` in cmd::mod.rs to avoid conflicts; frontend uses new command polling

* refactor(frontend): optimize profile switch with optimistic updates

* refactor(profile-switch): switch to event-driven flow with Profile Store

- SwitchManager pushes events; frontend polls get_profile_switch_events
- Zustand store handles optimistic profiles; AppDataProvider applies updates and background-fetches
- UI flicker removed

* fix(app-data): re-hook profile store updates during switch hydration

* fix(notification): restore frontend event dispatch and non-blocking emits

* fix(app-data-provider): restore proxy refresh and seed snapshot after refactor

* fix: ensure switch completion events are received and handle proxies-updated

* fix(app-data-provider): dedupe switch results by taskId and fix stale profile state

* fix(profile-switch): ensure patch_profiles_config_by_profile_index waits for real completion and handle join failures in apply_config_with_timeout

* docs: UPDATELOG.md

* chore: add necessary comments

* fix(core): always dispatch async proxy snapshot after RefreshClash event

* fix(proxy-store, provider): handle pending snapshots and proxy profiles

- Added pending snapshot tracking in proxy-store so `lastAppliedFetchId` no longer jumps on seed. Profile adoption is deferred until a qualifying fetch completes. Exposed `clearPendingProfile` for rollback support.
- Cleared pending snapshot state whenever live payloads apply or the store resets, preventing stale optimistic profile IDs after failures.
- In provider integration, subscribed to the pending proxy profile and fed it into target-profile derivation. Cleared it on failed switch results so hydration can advance and UI status remains accurate.

* fix(proxy): re-hook tray refresh events into proxy refresh queue

- Reattached listen("verge://refresh-proxy-config", …) at src/providers/app-data-provider.tsx:402 and registered it for cleanup.
- Added matching window fallback handler at src/providers/app-data-provider.tsx:430 so in-app dispatches share the same refresh path.

* fix(proxy-snapshot/proxy-groups): address review findings on snapshot placeholders

- src/utils/proxy-snapshot.ts:72-95 now derives snapshot group members solely from proxy-groups.proxies, so provider ids under `use` no longer generate placeholder proxy items.
- src/components/proxy/proxy-groups.tsx:665-677 lets the hydration overlay capture pointer events (and shows a wait cursor) so users can’t interact with snapshot-only placeholders before live data is ready.

* fix(profile-switch): preserve queued requests and avoid stale connection teardown

- Keep earlier queued switches intact by dropping the blanket “collapse” call: after removing duplicates for the same profile, new requests are simply appended, leaving other profiles pending (driver.rs:376). Resolves queue-loss scenario.
- Gate connection cleanup on real successes so cancelled/stale runs no longer tear down Mihomo connections; success handler now skips close_connections_after_switch when success == false (workflow.rs:419).

* fix(profile-switch, layout): improve profile validation and restore backend refresh

- Hardened profile validation using `tokio::fs` with a 5s timeout and offloading YAML parsing to `AsyncHandler::spawn_blocking`, preventing slow disks or malformed files from freezing the runtime (src-tauri/src/cmd/profile_switch/validation.rs:9, 71).
- Restored backend-triggered refresh handling by listening for `verge://refresh-clash-config` / `verge://refresh-verge-config` and invoking shared refresh services so SWR caches stay in sync with core events (src/pages/_layout/useLayoutEvents.ts:6, 45, 55).

* feat(profile-switch): handle cancellations for superseded requests

- Added a `cancelled` flag and constructor so superseded requests publish an explicit cancellation instead of a failure (src-tauri/src/cmd/profile_switch/state.rs:249, src-tauri/src/cmd/profile_switch/driver.rs:482)
- Updated the profile switch effect to log cancellations as info, retain the shared `mutate` call, and skip emitting error toasts while still refreshing follow-up work (src/pages/profiles.tsx:554, src/pages/profiles.tsx:581)
- Exposed the new flag on the TypeScript contract to keep downstream consumers type-safe (src/services/cmds.ts:20)

* fix(profiles): wrap logging payload for Tauri frontend_log

* fix(profile-switch): add rollback and error propagation for failed persistence

- Added rollback on apply failure so Mihomo restores to the previous profile
  before exiting the success path early (state_machine.rs:474).
- Reworked persist_profiles_with_timeout to surface timeout/join/save errors,
  convert them into CmdResult failures, and trigger rollback + error propagation
  when persistence fails (state_machine.rs:703).

* fix(profile-switch): prevent mid-finalize reentrancy and lingering tasks

* fix(profile-switch): preserve pending queue and surface discarded switches

* fix(profile-switch): avoid draining Mihomo sockets on failed/cancelled switches

* fix(app-data-provider): restore backend-driven refresh and reattach fallbacks

* fix(profile-switch): queue concurrent updates and add bounded wait/backoff

* fix(proxy): trigger live refresh on app start for proxy snapshot

* refactor(profile-switch): split flow into layers and centralize async cleanup

- Introduced `SwitchDriver` to encapsulate queue and driver logic while keeping the public Tauri command API.
- Added workflow/cleanup helpers for notification dispatch and Mihomo connection draining, re-exported for API consistency.
- Replaced monolithic state machine with `core.rs`, `context.rs`, and `stages.rs`, plus a thin `mod.rs` re-export layer; stage methods are now individually testable.
- Removed legacy `workflow/state_machine.rs` and adjusted visibility on re-exported types/constants to ensure compilation.
2025-10-30 17:29:15 +08:00
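
The long refactor above serializes switches through a driver, keeps only the newest intent, and cancels superseded work. A heavily condensed sketch of that shape using a Tokio task plus tokio_util's CancellationToken; the real SwitchDriver/SwitchStateMachine additionally carry validation, heartbeats, watchdogs, cleanup tracking, and rollback.

```rust
use tokio::sync::mpsc;
use tokio_util::sync::CancellationToken;

struct SwitchRequest {
    profile_id: String,
    cancel: CancellationToken,
}

/// Driver loop: one switch at a time; a newer request cancels the in-flight
/// one so only the latest intent survives.
async fn switch_driver(mut rx: mpsc::Receiver<String>) {
    let mut in_flight: Option<SwitchRequest> = None;

    while let Some(profile_id) = rx.recv().await {
        // Supersede whatever is currently running.
        if let Some(prev) = in_flight.take() {
            prev.cancel.cancel();
        }

        let req = SwitchRequest { profile_id, cancel: CancellationToken::new() };
        let token = req.cancel.clone();
        let id = req.profile_id.clone();
        in_flight = Some(req);

        tokio::spawn(async move {
            tokio::select! {
                _ = token.cancelled() => {
                    // Superseded: report a cancellation, not a failure.
                    println!("switch to {id} cancelled");
                }
                _ = run_switch_workflow(&id) => {
                    println!("switch to {id} finished");
                }
            }
        });
    }
}

async fn run_switch_workflow(_profile_id: &str) {
    // Stand-in for validation, draft patching, core reload, and rollback.
    tokio::time::sleep(std::time::Duration::from_millis(200)).await;
}
```
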
Tunglies
f4de4738f1 refactor(logger): replace ClashLogger with CLASH_LOGGER and update log handling; improve log retrieval and management 2025-10-29 17:58:02 +08:00
Tunglies
f39436f1d0 refactor(i18n): optimize translation handling with Arc<str> for better memory efficiency
refactor(tray): change menu text storage to use Arc<str> for improved performance
refactor(service): utilize SmartString for error messages to enhance memory management
2025-10-28 00:26:20 +08:00
Tunglies
c736796380 feat(clippy): cognitive-complexity rule (#5215)
* feat(config): enhance configuration initialization and validation process

* refactor(profile): streamline profile update logic and enhance error handling

* refactor(config): simplify profile item checks and streamline update flag processing

* refactor(disney_plus): add cognitive complexity allowance for check_disney_plus function

* refactor(enhance): restructure configuration and profile item handling for improved clarity and maintainability

* refactor(tray): add cognitive complexity allowance for create_tray_menu function

* refactor(config): add cognitive complexity allowance for patch_config function

* refactor(profiles): simplify item removal logic by introducing take_item_file_by_uid helper function

* refactor(profile): add new validation logic for profile configuration syntax

* refactor(profiles): improve formatting and readability of take_item_file_by_uid function

* refactor(cargo): change cognitive complexity level from warn to deny

* refactor(cargo): ensure cognitive complexity is denied in Cargo.toml

* refactor(i18n): clean up imports and improve code readability
refactor(proxy): simplify system proxy toggle logic
refactor(service): remove unnecessary `as_str()` conversion in error handling
refactor(tray): modularize tray menu creation for better maintainability

* refactor(tray): update menu item text handling to use references for improved performance
2025-10-27 20:55:51 +08:00
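
With clippy's cognitive_complexity lint denied through the [lints] table in Cargo.toml, most functions must stay below the threshold and the few intentionally complex ones carry an explicit allowance, as the check_disney_plus and create_tray_menu commits describe. A small sketch of the opt-out side; the function name and signature are illustrative, not the project's real code.

```rust
// cognitive_complexity = "deny" under [lints.clippy] in Cargo.toml makes
// this per-function allowance the documented escape hatch.
#[allow(clippy::cognitive_complexity)]
fn check_disney_plus_region(raw_response: &str) -> Option<&'static str> {
    // The long, branchy unlock-test parsing would live here; trimmed to a stub.
    if raw_response.is_empty() { None } else { Some("available") }
}
```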