Commit Graph

54 Commits

Author SHA1 Message Date
Tunglies
3e2f605e77 fix: improve error handling and logging in various modules 2025-11-05 02:11:43 +08:00
Tunglies
ab136e463f fix: change error handling in patch_profiles_config to return false when a switch is in progress
fix: improve error handling in patch_profiles_config to prevent requests during profile switching

fix: change error handling in patch_profiles_config to return false when a switch is in progress

fix: ensure CURRENT_SWITCHING_PROFILE is reset after config updates in perform_config_update and patch_profiles_config
2025-11-04 23:05:01 +08:00
Tunglies
bae584b1ab perf: optimize profile handle memory usage 2025-11-04 10:05:17 +08:00
Tunglies
b70d45b66a fix: update profile handling to apply and discard changes correctly 2025-11-04 07:58:14 +08:00
Tunglies
2287ea5f0b Refactor configuration access to use latest_arc() instead of latest_ref()
- Updated multiple instances in the codebase to replace calls to latest_ref() with latest_arc() for improved performance and memory management.
- This change affects various modules including validate, enhance, feat (backup, clash, config, profile, proxy, window), utils (draft, i18n, init, network, resolve, server).
- Ensured that all references to configuration data are now using the new arc-based approach to enhance concurrency and reduce cloning overhead.

refactor: update imports to explicitly include ClashInfo and Config in command files
2025-11-04 06:06:20 +08:00
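The switch from `latest_ref()` to `latest_arc()` trades a held read guard for a cheap `Arc` clone that can outlive the lock. A minimal sketch of the idea, with a hypothetical `ConfigHolder` standing in for the project's `Draft`-based config types:

```rust
use std::sync::{Arc, RwLock};

// Hypothetical config holder; the real project wraps its config structs in a
// Draft type, but the trade-off between the two accessors is the same.
struct ConfigHolder<T> {
    inner: RwLock<Arc<T>>,
}

impl<T> ConfigHolder<T> {
    fn new(value: T) -> Self {
        Self { inner: RwLock::new(Arc::new(value)) }
    }

    // latest_ref-style access: the read guard must stay alive while the data
    // is used, so it cannot easily be held across .await points.
    fn latest_ref(&self) -> std::sync::RwLockReadGuard<'_, Arc<T>> {
        self.inner.read().unwrap()
    }

    // latest_arc-style access: clone the Arc (a cheap pointer bump) and
    // release the lock immediately; the snapshot can then move into async
    // tasks or other threads without holding the lock or cloning the data.
    fn latest_arc(&self) -> Arc<T> {
        Arc::clone(&self.inner.read().unwrap())
    }
}

fn main() {
    let cfg = ConfigHolder::new(String::from("mixed-port: 7897"));
    let snapshot = cfg.latest_arc(); // lock already released here
    println!("len = {}", snapshot.len());
}
```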
Tunglies
501def6695 refactor: update patch_config method to accept a reference to IProfiles 2025-11-03 08:46:37 +08:00
Tunglies
dce349586c refactor: simplify profile retrieval and remove unused template method 2025-11-03 03:17:33 +08:00
oomeow
d4cb16f4ff perf: select proxy (#5284)
* perf: improve select proxy for group

* chore: update
2025-11-02 22:33:50 +08:00
Tunglies
fb260fb33d Refactor logging to use a centralized logging utility across the application (#5277)
- Replaced direct log calls with a new logging macro that includes a logging type for better categorization.
- Updated logging in various modules including `merge.rs`, `mod.rs`, `tun.rs`, `clash.rs`, `profile.rs`, `proxy.rs`, `window.rs`, `lightweight.rs`, `guard.rs`, `autostart.rs`, `dirs.rs`, `dns.rs`, `scheme.rs`, `server.rs`, and `window_manager.rs`.
- Introduced logging types such as `Core`, `Network`, `ProxyMode`, `Window`, `Lightweight`, `Service`, and `File` to enhance log clarity and filtering.
2025-11-01 20:47:01 +08:00
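A rough sketch of what a categorized logging macro of this shape could look like; the `Type` variants are taken from the commit message, while the macro body and crate choices (`log`, `env_logger`) are assumptions:

```rust
// Minimal sketch of a categorized logging macro; the Type variants come from
// the commit message, the macro body itself is an assumption.
#[derive(Debug)]
enum Type {
    Core,
    Network,
    ProxyMode,
    Window,
    Lightweight,
    Service,
    File,
}

macro_rules! logging {
    ($level:ident, $kind:expr, $($arg:tt)*) => {
        // Prefix every record with its logging type so logs can be filtered.
        log::$level!("[{:?}] {}", $kind, format_args!($($arg)*));
    };
}

fn main() {
    env_logger::init(); // assumes the `log` and `env_logger` crates
    logging!(info, Type::Network, "probing mixed port {}", 7897);
    logging!(warn, Type::Core, "core restart requested");
}
```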
Tunglies
9370a56337 refactor: reduce clone operation (#5268)
* refactor: optimize item handling and improve profile management

* refactor: update IVerge references to use references instead of owned values

* refactor: update patch_verge to use data_ref for improved data handling

* refactor: move handle_copy function to improve resource initialization logic

* refactor: update profile handling to use references for improved memory efficiency

* refactor: simplify get_item method and update profile item retrieval to use string slices

* refactor: update profile validation and patching to use references for improved performance

* refactor: update profile functions to use references for improved performance and memory efficiency

* refactor: update profile patching functions to use references for improved memory efficiency

* refactor: simplify merge function in PrfOption to enhance readability

* refactor: update change_core function to accept a reference for improved memory efficiency

* refactor: update PrfItem and profile functions to use references for improved memory efficiency

* refactor: update resolve_scheme function to accept a reference for improved memory efficiency

* refactor: update resolve_scheme function to accept a string slice for improved flexibility

* refactor: simplify update_profile parameters and logic
2025-11-01 20:03:56 +08:00
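The recurring theme in this PR is accepting borrows (`&T`, `&str`) instead of owned values so callers stop cloning. A small illustrative sketch, with the `IVerge` fields and function names invented for the example:

```rust
// Sketch of the borrow-instead-of-clone pattern; the IVerge fields and the
// two function signatures are illustrative assumptions.
#[derive(Clone, Default, Debug)]
struct IVerge {
    enable_tun_mode: Option<bool>,
    language: Option<String>,
}

// Before: an owned parameter forces the caller to clone or give up the value.
fn apply_owned(patch: IVerge) {
    println!("applying {patch:?}");
}

// After: a reference lets the caller keep ownership and skip the clone.
fn apply_ref(patch: &IVerge) {
    println!("applying {patch:?}");
}

fn main() {
    let patch = IVerge { enable_tun_mode: Some(true), ..Default::default() };
    apply_owned(patch.clone()); // extra clone only to satisfy the signature
    apply_ref(&patch);          // borrow, no clone
}
```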
Slinetrac
9dc50da167 fix: profile auto refresh #5274 2025-11-01 19:24:54 +08:00
Tunglies
b3b8eeb577 refactor: convert file operations to async using tokio fs (#5267)
* refactor: convert file operations to async using tokio fs

* refactor: integrate AsyncHandler for file operations in backup processes
2025-11-01 19:24:52 +08:00
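A brief sketch of the std-to-tokio file IO conversion described here; the paths and helpers are placeholders, not the project's real functions:

```rust
// Sketch of converting blocking file IO to tokio::fs, as the commit
// describes; paths and error handling here are placeholders.
use tokio::fs;

async fn read_profile(path: &str) -> std::io::Result<String> {
    // Non-blocking: the runtime can drive other tasks while the read runs.
    fs::read_to_string(path).await
}

async fn backup_profile(src: &str, dst: &str) -> std::io::Result<()> {
    let data = fs::read(src).await?;
    fs::write(dst, data).await?;
    Ok(())
}

#[tokio::main]
async fn main() -> std::io::Result<()> {
    fs::write("profile.yaml", "mode: rule\n").await?;
    let text = read_profile("profile.yaml").await?;
    backup_profile("profile.yaml", "profile.yaml.bak").await?;
    println!("read {} bytes", text.len());
    Ok(())
}
```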
Tunglies
a869dbb441 Revert "refactor: profile switch (#5197)"
This reverts commit c2dcd86722.
2025-10-30 18:11:04 +08:00
Sline
c2dcd86722 refactor: profile switch (#5197)
* refactor: proxy refresh

* fix(proxy-store): properly hydrate and filter backend provider snapshots

* fix(proxy-store): add monotonic fetch guard and event bridge cleanup

* fix(proxy-store): tweak fetch sequencing guard to prevent snapshot invalidation from wiping fast responses

* docs: UPDATELOG.md

* fix(proxy-snapshot, proxy-groups): restore last-selected proxy and group info

* fix(proxy): merge static and provider entries in snapshot; fix Virtuoso viewport height

* fix(proxy-groups): restrict reduced-height viewport to chain-mode column

* refactor(profiles): introduce a state machine

* refactor:replace state machine with reducer

* refactor:introduce a profile switch worker

* refactor: hooked up a backend-driven profile switch flow

* refactor(profile-switch): serialize switches with async queue and enrich frontend events

* feat(profiles): centralize profile switching with reducer/driver queue to fix stuck UI on rapid toggles

* chore: translate comments and log messages to English to avoid encoding issues

* refactor: migrate backend queue to SwitchDriver actor

* fix(profile): unify error string types in validation helper

* refactor(profile): make switch driver fully async and handle panics safely

* refactor(cmd): move switch-validation helper into new profile_switch module

* refactor(profile): modularize switch logic into profile_switch.rs

* refactor(profile_switch): modularize switch handler

- Break monolithic switch handler into proper module hierarchy
- Move shared globals, constants, and SwitchScope guard to state.rs
- Isolate queue orchestration and async task spawning in driver.rs
- Consolidate switch pipeline and config patching in workflow.rs
- Extract request pre-checks/YAML validation into validation.rs

* refactor(profile_switch): centralize state management and add cancellation flow

- Introduced SwitchManager in state.rs to unify mutex, sequencing, and SwitchScope handling.
- Added SwitchCancellation and SwitchRequest wrappers to encapsulate cancel tokens and notifications.
- Updated driver to allocate task IDs via SwitchManager, cancel old tokens, and queue next jobs in order.
- Updated workflow to check cancellation and sequence at each phase, replacing global flags with manager APIs.

* feat(profile_switch): integrate explicit state machine for profile switching

- workflow.rs:24 now delegates each switch to SwitchStateMachine, passing an owned SwitchRequest.
  Queue cancellation and state-sequence checks are centralized inside the machine instead of scattered guards.
- workflow.rs:176 replaces the old helper with `SwitchStateMachine::new(manager(), None, profiles).run().await`,
  ensuring manual profile patches follow the same workflow (locking, validation, rollback) as queued switches.
- workflow.rs:180 & 275 expose `validate_profile_yaml` and `restore_previous_profile` for reuse inside the state machine.

- workflow/state_machine.rs:1 introduces a dedicated state machine module.
  It manages global mutex acquisition, request/cancellation state, YAML validation, draft patching,
  `CoreManager::update_config`, failure rollback, and tray/notification side-effects.
  Transitions check for cancellations and stale sequences; completions release guards via `SwitchScope` drop.
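A loose sketch of a stage-driven switch state machine with cancellation and stale-sequence checks, as described above. All type names, fields, and the use of a `tokio-util` cancellation token are assumptions for illustration only:

```rust
use tokio_util::sync::CancellationToken;

#[derive(Debug, Clone, Copy)]
enum SwitchStage {
    Validate,
    PatchDraft,
    UpdateCore,
    Finalize,
}

struct SwitchStateMachine {
    sequence: u64,
    latest_sequence: u64, // in the real code this would come from a shared manager
    cancel: CancellationToken,
}

impl SwitchStateMachine {
    // A transition is abandoned if the request was cancelled or superseded.
    fn should_abort(&self) -> bool {
        self.cancel.is_cancelled() || self.sequence < self.latest_sequence
    }

    async fn run(&self) -> Result<bool, String> {
        for stage in [
            SwitchStage::Validate,
            SwitchStage::PatchDraft,
            SwitchStage::UpdateCore,
            SwitchStage::Finalize,
        ] {
            if self.should_abort() {
                println!("switch cancelled before {stage:?}");
                return Ok(false);
            }
            println!("running stage {stage:?}");
            // each stage would do its real work here (validation, draft patch, ...)
        }
        Ok(true)
    }
}

#[tokio::main]
async fn main() {
    let machine = SwitchStateMachine {
        sequence: 1,
        latest_sequence: 1,
        cancel: CancellationToken::new(),
    };
    println!("switch result: {:?}", machine.run().await);
}
```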

* refactor(profile-switch): integrate stage-aware panic handling

- src-tauri/src/cmd/profile_switch/workflow/state_machine.rs:1
  Defines SwitchStage and SwitchPanicInfo as crate-visible, wraps each transition in with_stage(...) with catch_unwind, and propagates CmdResult<bool> to distinguish validation failures from panics while keeping cancellation semantics.

- src-tauri/src/cmd/profile_switch/workflow.rs:25
  Updates run_switch_job to return Result<bool, SwitchPanicInfo>, routing timeout, validation, config, and stage panic cases separately. Reuses SwitchPanicInfo for logging/UI notifications; patch_profiles_config maps state-machine panics into user-facing error strings.

- src-tauri/src/cmd/profile_switch/driver.rs:1
  Adds SwitchJobOutcome to unify workflow results: normal completions carry bool, and panics propagate SwitchPanicInfo. The driver loop now logs panics explicitly and uses AssertUnwindSafe(...).catch_unwind() to guard setup-phase panics.

* refactor(profile-switch): add watchdog, heartbeat, and async timeout guards

- Introduce SwitchHeartbeat for stage tracking and timing; log stage transitions with elapsed durations.
- Add watchdog in driver to cancel stalled switches (5s heartbeat timeout).
- Wrap blocking ops (Config::apply, tray updates, profiles_save_file_safe, etc.) with time::timeout to prevent async stalls.
- Improve logs for stage transitions and watchdog timeouts to clarify cancellation points.
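A minimal sketch of the timeout guard pattern mentioned above: blocking work goes to the blocking pool and is bounded with `tokio::time::timeout`. The 5s budget mirrors the commit text; everything else is assumed:

```rust
use std::time::Duration;
use tokio::{task, time};

async fn apply_config_with_timeout() -> Result<(), String> {
    let work = task::spawn_blocking(|| {
        // stand-in for Config::apply / tray updates / profiles_save_file_safe
        std::thread::sleep(Duration::from_millis(200));
        Ok::<(), String>(())
    });

    match time::timeout(Duration::from_secs(5), work).await {
        Ok(Ok(result)) => result,                                    // blocking task finished
        Ok(Err(join_err)) => Err(format!("blocking task panicked: {join_err}")),
        Err(_) => Err("apply step exceeded 5s watchdog timeout".into()),
    }
}

#[tokio::main]
async fn main() {
    println!("apply result: {:?}", apply_config_with_timeout().await);
}
```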

* refactor(profile-switch): async post-switch tasks, early lock release, and spawn_blocking for IO

* feat(profile-switch): track cleanup and coordinate pipeline

- Add explicit cleanup tracking in the driver (`cleanup_profiles` map + `CleanupDone` messages) to know when background post-switch work is still running before starting a new workflow. (driver.rs:29-50)
- Update `handle_enqueue` to detect “cleanup in progress”: same-profile retries are short-circuited; other requests collapse the pending queue, cancelling old tokens so only the latest intent survives. (driver.rs:176-247)
- Rework scheduling helpers: `start_next_job` refuses to start while cleanup is outstanding; discarded requests release cancellation tokens; cleanup completion explicitly restarts the pipeline. (driver.rs:258-442)

* feat(profile-switch): unify post-switch cleanup handling

- workflow.rs (25-427) returns `SwitchWorkflowResult` (success + CleanupHandle) or `SwitchWorkflowError`.
  All failure/timeout paths stash post-switch work into a single CleanupHandle.
  Cleanup helpers (`notify_profile_switch_finished` and `close_connections_after_switch`) run inside that task for proper lifetime handling.

- driver.rs (29-439) propagates CleanupHandle through `SwitchJobOutcome`, spawns a bridge to wait for completion, and blocks `start_next_job` until done.
  Direct driver-side panics now schedule failure cleanup via the shared helper.

* tmp

* Revert "tmp"

This reverts commit e582cf4a65.

* refactor: queue frontend events through async dispatcher

* refactor: queue frontend switch/proxy events and throttle notices

* chore: frontend debug log

* fix: re-enable only ProfileSwitchFinished events - keep others suppressed for crash isolation

- Re-enabled only ProfileSwitchFinished events; RefreshClash, RefreshProxy, and ProfileChanged remain suppressed (they log suppression messages)
- Allows frontend to receive task completion notifications for UI feedback while crash isolation continues
- src-tauri/src/core/handle.rs now only suppresses notify_profile_changed
- Serialized emitter, frontend logging bridge, and other diagnostics unchanged

* refactor: refreshClashData

* refactor(proxy): stabilize proxy switch pipeline and rendering

- Add coalescing buffer in notification.rs to emit only the latest proxies-updated snapshot
- Replace nextTick with queueMicrotask in asyncQueue.ts for same-frame hydration
- Hide auto-generated GLOBAL snapshot and preserve optional metadata in proxy-snapshot.ts
- Introduce stable proxy rendering state in AppDataProvider (proxyTargetProfileId, proxyDisplayProfileId, isProxyRefreshPending)
- Update proxy page to fade content during refresh and overlay status banner instead of showing incomplete snapshot

* refactor(profiles): move manual activating logic to reducer for deterministic queue tracking

* refactor: replace proxy-data event bridge with pure polling and simplify proxy store

- Replaced the proxy-data event bridge with pure polling: AppDataProvider now fetches the initial snapshot and drives refreshes from the polled switchStatus, removing verge://refresh-* listeners (src/providers/app-data-provider.tsx).
- Simplified proxy-store by dropping the proxies-updated listener queue and unused payload/normalizer helpers; relies on SWR/provider fetch path + calcuProxies for live updates (src/stores/proxy-store.ts).
- Trimmed layout-level event wiring to keep only notice/show/hide subscriptions, removing obsolete refresh listeners (src/pages/_layout/useLayoutEvents.ts).

* refactor(proxy): streamline proxies-updated handling and store event flow

- AppDataProvider now treats `proxies-updated` as the fast path: the listener
  calls `applyLiveProxyPayload` immediately and schedules only a single fallback
  `fetchLiveProxies` ~600 ms later (replacing the old 0/250/1000/2000 cascade).
  Expensive provider/rule refreshes run in parallel via `Promise.allSettled`, and
  the multi-stage queue on profile updates completion was removed
  (src/providers/app-data-provider.tsx).

- Rebuilt proxy-store to support the event flow: restored `setLive`, provider
  normalization, and an animation-frame + async queue that applies payloads without
  blocking. Exposed `applyLiveProxyPayload` so providers can push events directly
  into the store (src/stores/proxy-store.ts).

* refactor: switch delay

* refactor(app-data-provider): trigger getProfileSwitchStatus revalidation on profile-switch-finished

- AppDataProvider now listens to `profile-switch-finished` and calls `mutate("getProfileSwitchStatus")` to immediately update state and unlock buttons (src/providers/app-data-provider.tsx).
- Retain existing detailed timing logs for monitoring other stages.
- Frontend success notifications remain instant; background refreshes continue asynchronously.

* fix(profiles): prevent duplicate toast on page remount

* refactor(profile-switch): make active switches preemptible and prevent queue piling

- Add notify mechanism to SwitchCancellation to await cancellation without busy-waiting (state.rs:82)
- Collapse pending queue to a single entry in the driver; cancel in-flight task on newer request (driver.rs:232)
- Update handle_update_core to watch cancel token and 30s timeout; release locks, discard draft, and exit early if canceled (state_machine.rs:301)
- Providers revalidate status immediately on profile-switch-finished events (app-data-provider.tsx:208)

* refactor(core): make core reload phase controllable, reduce 0xcfffffff risk

- CoreManager::apply_config now calls `reload_config_with_retry`, each attempt waits up to 5s, retries 3 times; on failure, returns error with duration logged and triggers core restart if needed (src-tauri/src/core/manager/config.rs:175, 205)
- `reload_config_with_retry` logs attempt info on timeout or error; if error is a Mihomo connection issue, fallback to original restart logic (src-tauri/src/core/manager/config.rs:211)
- `reload_config_once` retains original Mihomo call for retry wrapper usage (src-tauri/src/core/manager/config.rs:247)
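A small sketch of the bounded retry loop described here, assuming placeholder function names; each attempt is capped at 5s and at most three attempts are made before surfacing an error:

```rust
use std::time::Duration;
use tokio::time;

async fn reload_config_once() -> Result<(), String> {
    // stand-in for the Mihomo reload call
    time::sleep(Duration::from_millis(100)).await;
    Ok(())
}

async fn reload_config_with_retry() -> Result<(), String> {
    for attempt in 1..=3 {
        match time::timeout(Duration::from_secs(5), reload_config_once()).await {
            Ok(Ok(())) => return Ok(()),
            Ok(Err(err)) => println!("attempt {attempt} failed: {err}"),
            Err(_) => println!("attempt {attempt} timed out after 5s"),
        }
    }
    Err("config reload failed after 3 attempts".into())
}

#[tokio::main]
async fn main() {
    println!("reload: {:?}", reload_config_with_retry().await);
}
```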

* chore(frontend-logs): downgrade routine event logs from info to debug

- Logs like `emit_via_app entering spawn_blocking`, `Async emit…`, `Buffered proxies…` are now debug-level (src-tauri/src/core/notification.rs:155, :265, :309…)
- Genuine warnings/errors (failures/timeouts) remain at warn/error
- Core stage logs remain info to keep backend tracking visible

* refactor(frontend-emit): make emit_via_app fire-and-forget async

- `emit_via_app` now a regular function; spawns with `tokio::spawn` and logs a warn if `emit_to` fails, caller returns immediately (src-tauri/src/core/notification.rs:269)
- Removed `.await` at Async emit and flush_proxies calls; only record dispatch duration and warn on failure (src-tauri/src/core/notification.rs:211, :329)

* refactor(ui): restructure profile switch for event-driven speed + polling stability

- Backend
  - SwitchManager maintains a lightweight event queue: added `event_sequence`, `recent_events`, and `SwitchResultEvent`; provides `push_event` / `events_after` (state.rs)
  - `handle_completion` pushes events on success/failure and keeps `last_result` (driver.rs) for frontend incremental fetch
  - New Tauri command `get_profile_switch_events(after_sequence)` exposes `events_after` (profile_switch/mod.rs → profile.rs → lib.rs)
- Notification system
  - `NotificationSystem::process_event` only logs debug, disables WebView `emit_to`, fixes 0xcfffffff
  - Related emit/buffer functions now safe no-op, removed unused structures and warnings (notification.rs)
- Frontend
  - services/cmds.ts defines `SwitchResultEvent` and `getProfileSwitchEvents`
  - `AppDataProvider` holds `switchEventSeqRef`, polls incremental events every 0.25s (busy) / 1s (idle); each event triggers:
      - immediate `globalMutate("getProfiles")` to refresh current profile
      - background refresh of proxies/providers/rules via `Promise.allSettled` (failures logged, non-blocking)
      - forced `mutateSwitchStatus` to correct state
  - original switchStatus effect calls `handleSwitchResult` as fallback; other toast/activation logic handled in profiles.tsx
- Commands / API cleanup
  - removed `pub use profile_switch::*;` in cmd::mod.rs to avoid conflicts; frontend uses new command polling
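A compact sketch of the sequence-numbered event queue the backend exposes for polling; names such as `SwitchManager`, `push_event`, and `events_after` follow the commit text, but the concrete layout is assumed:

```rust
use std::collections::VecDeque;

#[derive(Clone, Debug)]
struct SwitchResultEvent {
    sequence: u64,
    profile_id: String,
    success: bool,
}

#[derive(Default)]
struct SwitchManager {
    event_sequence: u64,
    recent_events: VecDeque<SwitchResultEvent>,
}

impl SwitchManager {
    fn push_event(&mut self, profile_id: &str, success: bool) {
        self.event_sequence += 1;
        self.recent_events.push_back(SwitchResultEvent {
            sequence: self.event_sequence,
            profile_id: profile_id.to_string(),
            success,
        });
        // keep the queue bounded so polling clients never see unbounded growth
        while self.recent_events.len() > 32 {
            self.recent_events.pop_front();
        }
    }

    // The frontend passes the last sequence it has seen and only receives
    // the newer events, matching get_profile_switch_events(after_sequence).
    fn events_after(&self, after_sequence: u64) -> Vec<SwitchResultEvent> {
        self.recent_events
            .iter()
            .filter(|e| e.sequence > after_sequence)
            .cloned()
            .collect()
    }
}

fn main() {
    let mut manager = SwitchManager::default();
    manager.push_event("profile-a", true);
    manager.push_event("profile-b", false);
    println!("new events: {:?}", manager.events_after(1));
}
```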

* refactor(frontend): optimize profile switch with optimistic updates

* refactor(profile-switch): switch to event-driven flow with Profile Store

- SwitchManager pushes events; frontend polls get_profile_switch_events
- Zustand store handles optimistic profiles; AppDataProvider applies updates and background-fetches
- UI flicker removed

* fix(app-data): re-hook profile store updates during switch hydration

* fix(notification): restore frontend event dispatch and non-blocking emits

* fix(app-data-provider): restore proxy refresh and seed snapshot after refactor

* fix: ensure switch completion events are received and handle proxies-updated

* fix(app-data-provider): dedupe switch results by taskId and fix stale profile state

* fix(profile-switch): ensure patch_profiles_config_by_profile_index waits for real completion and handle join failures in apply_config_with_timeout

* docs: UPDATELOG.md

* chore: add necessary comments

* fix(core): always dispatch async proxy snapshot after RefreshClash event

* fix(proxy-store, provider): handle pending snapshots and proxy profiles

- Added pending snapshot tracking in proxy-store so `lastAppliedFetchId` no longer jumps on seed. Profile adoption is deferred until a qualifying fetch completes. Exposed `clearPendingProfile` for rollback support.
- Cleared pending snapshot state whenever live payloads apply or the store resets, preventing stale optimistic profile IDs after failures.
- In provider integration, subscribed to the pending proxy profile and fed it into target-profile derivation. Cleared it on failed switch results so hydration can advance and UI status remains accurate.

* fix(proxy): re-hook tray refresh events into proxy refresh queue

- Reattached listen("verge://refresh-proxy-config", …) at src/providers/app-data-provider.tsx:402 and registered it for cleanup.
- Added matching window fallback handler at src/providers/app-data-provider.tsx:430 so in-app dispatches share the same refresh path.

* fix(proxy-snapshot/proxy-groups): address review findings on snapshot placeholders

- src/utils/proxy-snapshot.ts:72-95 now derives snapshot group members solely from proxy-groups.proxies, so provider ids under `use` no longer generate placeholder proxy items.
- src/components/proxy/proxy-groups.tsx:665-677 lets the hydration overlay capture pointer events (and shows a wait cursor) so users can’t interact with snapshot-only placeholders before live data is ready.

* fix(profile-switch): preserve queued requests and avoid stale connection teardown

- Keep earlier queued switches intact by dropping the blanket “collapse” call: after removing duplicates for the same profile, new requests are simply appended, leaving other profiles pending (driver.rs:376). Resolves queue-loss scenario.
- Gate connection cleanup on real successes so cancelled/stale runs no longer tear down Mihomo connections; success handler now skips close_connections_after_switch when success == false (workflow.rs:419).

* fix(profile-switch, layout): improve profile validation and restore backend refresh

- Hardened profile validation using `tokio::fs` with a 5s timeout and offloading YAML parsing to `AsyncHandler::spawn_blocking`, preventing slow disks or malformed files from freezing the runtime (src-tauri/src/cmd/profile_switch/validation.rs:9, 71).
- Restored backend-triggered refresh handling by listening for `verge://refresh-clash-config` / `verge://refresh-verge-config` and invoking shared refresh services so SWR caches stay in sync with core events (src/pages/_layout/useLayoutEvents.ts:6, 45, 55).

* feat(profile-switch): handle cancellations for superseded requests

- Added a `cancelled` flag and constructor so superseded requests publish an explicit cancellation instead of a failure (src-tauri/src/cmd/profile_switch/state.rs:249, src-tauri/src/cmd/profile_switch/driver.rs:482)
- Updated the profile switch effect to log cancellations as info, retain the shared `mutate` call, and skip emitting error toasts while still refreshing follow-up work (src/pages/profiles.tsx:554, src/pages/profiles.tsx:581)
- Exposed the new flag on the TypeScript contract to keep downstream consumers type-safe (src/services/cmds.ts:20)

* fix(profiles): wrap logging payload for Tauri frontend_log

* fix(profile-switch): add rollback and error propagation for failed persistence

- Added rollback on apply failure so Mihomo restores to the previous profile
  before exiting the success path early (state_machine.rs:474).
- Reworked persist_profiles_with_timeout to surface timeout/join/save errors,
  convert them into CmdResult failures, and trigger rollback + error propagation
  when persistence fails (state_machine.rs:703).

* fix(profile-switch): prevent mid-finalize reentrancy and lingering tasks

* fix(profile-switch): preserve pending queue and surface discarded switches

* fix(profile-switch): avoid draining Mihomo sockets on failed/cancelled switches

* fix(app-data-provider): restore backend-driven refresh and reattach fallbacks

* fix(profile-switch): queue concurrent updates and add bounded wait/backoff

* fix(proxy): trigger live refresh on app start for proxy snapshot

* refactor(profile-switch): split flow into layers and centralize async cleanup

- Introduced `SwitchDriver` to encapsulate queue and driver logic while keeping the public Tauri command API.
- Added workflow/cleanup helpers for notification dispatch and Mihomo connection draining, re-exported for API consistency.
- Replaced monolithic state machine with `core.rs`, `context.rs`, and `stages.rs`, plus a thin `mod.rs` re-export layer; stage methods are now individually testable.
- Removed legacy `workflow/state_machine.rs` and adjusted visibility on re-exported types/constants to ensure compilation.
2025-10-30 17:29:15 +08:00
Tunglies
c736796380 feat(clippy): cognitive-complexity rule (#5215)
* feat(config): enhance configuration initialization and validation process

* refactor(profile): streamline profile update logic and enhance error handling

* refactor(config): simplify profile item checks and streamline update flag processing

* refactor(disney_plus): add cognitive complexity allowance for check_disney_plus function

* refactor(enhance): restructure configuration and profile item handling for improved clarity and maintainability

* refactor(tray): add cognitive complexity allowance for create_tray_menu function

* refactor(config): add cognitive complexity allowance for patch_config function

* refactor(profiles): simplify item removal logic by introducing take_item_file_by_uid helper function

* refactor(profile): add new validation logic for profile configuration syntax

* refactor(profiles): improve formatting and readability of take_item_file_by_uid function

* refactor(cargo): change cognitive complexity level from warn to deny

* refactor(cargo): ensure cognitive complexity is denied in Cargo.toml

* refactor(i18n): clean up imports and improve code readability
refactor(proxy): simplify system proxy toggle logic
refactor(service): remove unnecessary `as_str()` conversion in error handling
refactor(tray): modularize tray menu creation for better maintainability

* refactor(tray): update menu item text handling to use references for improved performance
2025-10-27 20:55:51 +08:00
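For reference, a hedged sketch of how a deny-by-default cognitive-complexity lint with per-function allowances typically looks; the commit configures the deny via Cargo.toml, while this standalone example uses a crate-level attribute instead, and the function body is illustrative:

```rust
// Deny the clippy lint project-wide (the commit does this in Cargo.toml),
// then allow it on specific, intentionally branchy functions.
#![deny(clippy::cognitive_complexity)]

#[allow(clippy::cognitive_complexity)]
fn create_tray_menu(items: &[&str]) -> Vec<String> {
    // a deliberately branchy function that would otherwise trip the lint
    let mut menu = Vec::new();
    for item in items {
        if item.is_empty() {
            continue;
        } else if item.starts_with("sep") {
            menu.push("---".to_string());
        } else {
            menu.push(format!("entry: {item}"));
        }
    }
    menu
}

fn main() {
    println!("{:?}", create_tray_menu(&["profiles", "sep", "quit"]));
}
```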
Tunglies
a05ea64bcd perf: utilize smartstring for string handling (#5149)
* perf: utilize smartstring for string handling

- Updated various modules to replace standard String with smartstring::alias::String for improved performance and memory efficiency.
- Adjusted string manipulations and conversions throughout the codebase to ensure compatibility with the new smartstring type.
- Enhanced readability and maintainability by using `.into()` for conversions where applicable.
- Ensured that all instances of string handling in configuration, logging, and network management leverage the benefits of smartstring.

* fix: replace wrap_err with stringify_err for better error handling in UWP tool invocation

* refactor: update import path for StringifyErr and adjust string handling in sysopt

* fix: correct import path for CmdResult in UWP module

* fix: update argument type for execute_sysproxy_command to use std::string::String

* fix: add missing CmdResult import in UWP platform module

* fix: improve string handling and error messaging across multiple files

* style: format code for improved readability and consistency across multiple files

* fix: remove unused file
2025-10-22 16:25:44 +08:00
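A short sketch of the `smartstring::alias::String` substitution and the `.into()` conversions the commit mentions; the `ClashInfo` fields here are illustrative:

```rust
// Short strings are stored inline (no heap allocation) by smartstring;
// .into() converts between it and std String at the boundaries.
use smartstring::alias::String as SmartString;

#[derive(Debug)]
struct ClashInfo {
    // short, frequently copied values benefit most from inline storage
    mode: SmartString,
    mixed_port: u16,
}

fn main() {
    let info = ClashInfo {
        mode: "rule".into(), // &str -> SmartString
        mixed_port: 7897,
    };
    let owned: std::string::String = info.mode.clone().into(); // SmartString -> std String
    println!("{info:?}, std copy = {owned}");
}
```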
Tunglies
8e20b1b0a0 feat: enhance profile update logic to include auto-update option handling 2025-10-18 17:40:55 +08:00
Tunglies
bb2059c76f fix: resolve issue with file deletion during subscription removal 2025-10-14 17:56:38 +08:00
Tunglies
3d96a575c0 refactor: streamline profile import logic and enhance error handling (#5051) 2025-10-14 12:39:22 +08:00
Tunglies
8c0af66ca9 Refactor logging macros to remove print control parameter
- Updated logging macros to eliminate the boolean parameter for print control, simplifying the logging calls throughout the codebase.
- Adjusted all logging calls in various modules (lib.rs, lightweight.rs, help.rs, init.rs, logging.rs, resolve/mod.rs, resolve/scheme.rs, resolve/ui.rs, resolve/window.rs, server.rs, singleton.rs, window_manager.rs) to reflect the new macro structure.
- Ensured consistent logging behavior across the application by standardizing the logging format.
2025-10-10 13:05:37 +08:00
oomeow
7fc238c27b refactor: invoke mihomo api by using tauri-plugin-mihomo (#4926)
* feat: add tauri-plugin-mihomo

* refactor: invoke mihomo api by using tauri-plugin-mihomo

* chore: todo

* chore: update

* chore: update

* chore: update

* chore: update

* fix: incorrect delay status and update pretty config

* chore: update

* chore: remove cache

* chore: update

* chore: update

* fix: app froze when changing group proxy

* chore: update

* chore: update

* chore: add rustfmt.toml to tauri-plugin-mihomo

* chore: happy clippy

* refactor: connect mihomo websocket

* chore: update

* chore: update

* fix: parse bigint to number

* chore: update

* Revert "fix: parse bigint to number"

This reverts commit 74c006522e.

* chore: use number instead of bigint

* chore: cleanup

* fix: rule data not refreshed when switching profiles

* chore: update

* chore: cleanup

* chore: update

* fix: traffic graph data display

* feat: add ipc connection pool

* chore: update

* chore: clippy

* fix: incorrect delay status

* fix: typo

* fix: empty proxies tray menu

* chore: clippy

* chore: import tauri-plugin-mihomo by using git repo

* chore: cleanup

* fix: mihomo api

* fix: incorrect delay status

* chore: update tauri-plugin-mihomo dep

chore: update
2025-10-08 12:32:40 +08:00
Sline
39673af46f fix: ensure frontend sync after profile create/delete (#4956) 2025-10-07 07:17:21 +08:00
Tunglies
251678493c edition 2024 (#4702)
* feat: update Cargo.toml for 2024 edition and optimize release profiles

* feat: refactor environment variable settings for Linux and improve code organization

* Refactor conditional statements to use `&&` for improved readability

- Updated multiple files to combine nested `if let` statements using `&&` for better clarity and conciseness.
- This change enhances the readability of the code by reducing indentation levels and making the conditions more straightforward.
- Affected files include: media_unlock_checker.rs, profile.rs, clash.rs, profiles.rs, async_proxy_query.rs, core.rs, handle.rs, hotkey.rs, service.rs, timer.rs, tray/mod.rs, merge.rs, seq.rs, config.rs, proxy.rs, window.rs, general.rs, dirs.rs, i18n.rs, init.rs, network.rs, and window.rs in the resolve module.

* refactor: streamline conditional checks using `&&` for improved readability

* fix: update release profile settings for panic behavior and optimization

* fix: adjust optimization level in Cargo.toml and reorder imports in lightweight.rs
2025-09-10 09:49:06 +08:00
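The `&&`-combined `if let` statements referenced above are let-chains, which need the 2024 edition this PR enables. A minimal before/after sketch with invented data:

```rust
struct Profile {
    uid: Option<String>,
    url: Option<String>,
}

fn describe(profile: &Profile) {
    // Before: nested `if let` blocks, one indentation level per condition.
    if let Some(uid) = &profile.uid {
        if let Some(url) = &profile.url {
            println!("(nested) {uid} -> {url}");
        }
    }

    // After: a single let-chain keeps both bindings on one level
    // (requires edition 2024 / a recent Rust toolchain).
    if let Some(uid) = &profile.uid
        && let Some(url) = &profile.url
    {
        println!("(chained) {uid} -> {url}");
    }
}

fn main() {
    describe(&Profile {
        uid: Some("r42".into()),
        url: Some("https://example.com/sub".into()),
    });
}
```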
Tunglies
55b95a1985 Revert "feat: update Cargo.toml for 2024 edition and optimize release profiles (#4681)"
This reverts commit 31e3104c7f.
2025-09-08 21:48:09 +08:00
Tunglies
31e3104c7f feat: update Cargo.toml for 2024 edition and optimize release profiles (#4681)
* feat: update Cargo.toml for 2024 edition and optimize release profiles

* feat: refactor environment variable settings for Linux and improve code organization

* Refactor conditional statements to use `&&` for improved readability

- Updated multiple files to combine nested `if let` statements using `&&` for better clarity and conciseness.
- This change enhances the readability of the code by reducing indentation levels and making the conditions more straightforward.
- Affected files include: media_unlock_checker.rs, profile.rs, clash.rs, profiles.rs, async_proxy_query.rs, core.rs, handle.rs, hotkey.rs, service.rs, timer.rs, tray/mod.rs, merge.rs, seq.rs, config.rs, proxy.rs, window.rs, general.rs, dirs.rs, i18n.rs, init.rs, network.rs, and window.rs in the resolve module.

* refactor: streamline conditional checks using `&&` for improved readability
2025-09-08 13:57:32 +08:00
Tunglies
eaaae3b393 fix: improve stability during profile switching to prevent crashes 2025-08-30 07:40:31 +08:00
Tunglies
3939741a06 refactor: migrate from serde_yaml to serde_yaml_ng for improved YAML handling (#4568)
* refactor: migrate from serde_yaml to serde_yaml_ng for improved YAML handling

* refactor: format code for better readability in DNS configuration
2025-08-30 02:24:47 +08:00
Tunglies
6eecd70bd5 fix(subscription): resolve issues causing import failures in some cases #4534, #4436, #4552, #4519, #4517, #4503, #4336, #4301 (#4553)
* fix(subscription): resolve issues causing import failures in some cases #4534, #4436, #4552, #4519, #4517, #4503, #4336, #4301

* fix(profile): update profile creation to include file data handling

* fix(app): improve singleton instance exit handling

* fix: remove unused handle method
2025-08-29 17:46:46 +08:00
Tunglies
d23d1d9a1d fix: remove auto clean up profiles behavior in resolve process 2025-08-28 04:41:12 +08:00
Tunglies
355a18e5eb refactor(async): migrate from sync-blocking async execution to true async with unified AsyncHandler::spawn (#4502)
* feat: replace all tokio::spawn with unified AsyncHandler::spawn

- 🚀 Core Improvements:
  * Replace all tokio::spawn calls with AsyncHandler::spawn for unified Tauri async task management
  * Prioritize converting sync functions to async functions to reduce spawn usage
  * Use .await directly in async contexts instead of spawn

- 🔧 Major Changes:
  * core/hotkey.rs: Use AsyncHandler::spawn for hotkey callback functions
  * module/lightweight.rs: Async lightweight mode switching
  * feat/window.rs: Convert window operation functions to async, use .await internally
  * feat/proxy.rs, feat/clash.rs: Async proxy and mode switching functions
  * lib.rs: Window focus handling with AsyncHandler::spawn
  * core/tray/mod.rs: Complete async tray event handling

- Technical Advantages:
  * Unified task tracking and debugging capabilities (via tokio-trace feature)
  * Better error handling and task management
  * Consistency with Tauri runtime
  * Reduced async boundaries for better performance

- 🧪 Verification:
  * Compilation successful with 0 errors, 0 warnings
  * Maintains complete original functionality
  * Optimized async execution flow

* feat: complete tokio fs migration and replace tokio::spawn with AsyncHandler

🚀 Major achievements:
- Migrate 8 core modules from std::fs to tokio::fs
- Create 6 Send-safe wrapper functions using spawn_blocking pattern
- Replace all tokio::spawn calls with AsyncHandler::spawn for unified async task management
- Solve all 19 Send trait compilation errors through innovative spawn_blocking architecture

🔧 Core changes:
- config/profiles.rs: Add profiles_*_safe functions to handle Send trait constraints
- cmd/profile.rs: Update all Tauri commands to use Send-safe operations
- config/prfitem.rs: Replace append_item calls with profiles_append_item_safe
- utils/help.rs: Convert YAML operations to async (read_yaml, save_yaml)
- Multiple modules: Replace tokio::task::spawn_blocking with AsyncHandler::spawn_blocking

Technical innovations:
- spawn_blocking wrapper pattern resolves parking_lot RwLock Send trait conflicts
- Maintain parking_lot performance while achieving Tauri async command compatibility
- Preserve backwards compatibility with gradual migration strategy

🎯 Results:
- Zero compilation errors
- Zero warnings
- All async file operations working correctly
- Complete Send trait compliance for Tauri commands

* feat: refactor app handle and command functions to use async/await for improved performance

* feat: update async handling in profiles and logging functions for improved error handling and performance

* fix: update TRACE_MINI_SIZE constant to improve task logging threshold

* fix(windows): convert service management functions to async for improved performance

* fix: convert service management functions to async for improved responsiveness

* fix(ubuntu): convert install and reinstall service functions to async for improved performance

* fix(linux): convert uninstall_service function to async for improved performance

* fix: convert uninstall_service call to async for improved performance

* fix: convert file and directory creation calls to async for improved performance

* fix: convert hotkey functions to async for improved responsiveness

* chore: update UPDATELOG.md for v2.4.1 with major improvements and performance optimizations
2025-08-26 01:49:51 +08:00
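A rough sketch of the Send-safe `spawn_blocking` wrapper pattern described in this commit: the parking_lot guard never crosses an `.await`, so the async command stays `Send`. `AsyncHandler` is replaced here by plain `tokio::task`, and the names and signatures are assumptions:

```rust
use std::sync::Arc;
use parking_lot::RwLock;
use tokio::task;

struct Profiles {
    current: Option<String>,
}

async fn profiles_save_file_safe(store: Arc<RwLock<Profiles>>) -> Result<(), String> {
    task::spawn_blocking(move || {
        // The guard lives entirely inside the blocking closure, so no
        // non-Send value ever crosses an .await point.
        let guard = store.read();
        println!("saving current profile: {:?}", guard.current);
        Ok::<(), String>(())
    })
    .await
    .map_err(|e| format!("join error: {e}"))?
}

#[tokio::main]
async fn main() {
    let store = Arc::new(RwLock::new(Profiles { current: Some("r42".into()) }));
    println!("save: {:?}", profiles_save_file_safe(store).await);
}
```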
Tunglies
0d070fb934 refactor: update AppHandle usage to use Arc<AppHandle> for improved memory management (#4491)
* refactor: update AppHandle usage to use Arc<AppHandle> for improved memory management

* fix: clippy ci

* fix: ensure default_latency_test is safely accessed with non-null assertion
2025-08-23 00:20:58 +08:00
Tunglies
6d112c387d refactor: replace tokio::task::spawn_blocking with AsyncHandler::spawn_blocking for improved task management 2025-08-22 04:05:35 +08:00
Tunglies
499626b946 fix: resolve intermittent startup deadlock issues
- Optimize configuration access locks to prevent race conditions
- Enhance UI monitoring thread with non-blocking lock operations
- Improve window creation timing and synchronization
- Add comprehensive deadlock detection and debugging logs
- Simplify code structure with better error handling patterns
- Update changelog with user-friendly descriptions
2025-08-06 22:12:00 +08:00
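A small sketch of the non-blocking lock access mentioned above, where a monitoring thread uses `try_read` and simply skips a cycle when the lock is busy instead of parking behind a writer; names and timings are illustrative:

```rust
use std::sync::{Arc, RwLock};
use std::thread;
use std::time::Duration;

fn main() {
    let ui_state = Arc::new(RwLock::new(String::from("starting")));

    let monitor_state = Arc::clone(&ui_state);
    let monitor = thread::spawn(move || {
        for _ in 0..5 {
            // Non-blocking read: never waits on a writer, so the monitor
            // cannot take part in a lock-ordering deadlock.
            match monitor_state.try_read() {
                Ok(state) => println!("monitor sees: {state}"),
                Err(_) => println!("monitor: lock busy, skipping this cycle"),
            }
            thread::sleep(Duration::from_millis(50));
        }
    });

    {
        let mut state = ui_state.write().unwrap();
        *state = "window created".to_string();
    } // writer guard dropped promptly so readers are not starved

    monitor.join().unwrap();
}
```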
Tunglies
44e8a035aa fix: improve profile import validation and handle async lock correctly
fix: refactor import_profile function for improved readability and maintainability
2025-08-05 23:23:11 +08:00
Tunglies
a66393c609 feat: enhance profile import functionality with timeout and robust refresh strategy 2025-08-05 20:29:36 +08:00
Tunglies
764ef48fd1 refactor(Draft): Replace latest() with latest_ref() and data() with data_mut() in multiple files for improved mutability handling and consistency across the codebase (#3987)
* feat: add benchmarking for draft operations and new draft management structure

* Refactor Config Access: Replace `latest()` with `latest_ref()` and `data()` with `data_mut()` in multiple files for improved mutability handling and consistency across the codebase.

* refactor: remove DraftNew implementation and related benchmarks for cleaner codebase
2025-07-04 22:43:23 +08:00
Tunglies
a574ced428 Refactor logging statements to use the new formatting syntax for improved readability and consistency across the codebase. This includes updating error, warning, and info logs in various modules such as system commands, configuration, core functionalities, and utilities. Additionally, minor adjustments were made to string formatting in backup and proxy features to enhance clarity. 2025-06-27 23:30:59 +08:00
wonfen
1a6454ee79 perf: optimize profile switching logic with interrupt support to prevent freeze 2025-06-21 10:04:12 +08:00
Tunglies
a8257e8cb2 fix: unexpected behavior while pulling resources (#3789)
fix: unexpected behavior while pulling resources

Commit 25cfd162f6 introduced a new pnpm `prepare` script, but the `prepare` keyword conflicts with the package manager's lifecycle scripts. As a result, the resource pull ran twice during the workflow, and when it was invoked as a lifecycle script it did not carry the expected `${{ matrix.target }}`. This behavior in turn broke builds on the macOS Intel x86 platform.

Affected issues: #3753, #3771
2025-06-18 01:11:33 +08:00
wonfen
ac7307b1f7 feat: enhance profile management and proxy refresh with improved event listening and state updates 2025-06-17 11:38:53 +08:00
wonfen
d05b1c3130 feat: clean up redundant profiles files 2025-06-09 13:48:38 +08:00
Tunglies
689042df60 refactor: use Box to store large config objects and add memory usage tests
- Refactored config-related structs to use Box for storing large objects (e.g., IRuntime, IProfiles, PrfItem) to reduce stack memory usage and improve performance.
- Updated related methods and assignments to handle Boxed types correctly.
- Added and improved unit tests to compare memory usage between Boxed and non-Boxed config objects, demonstrating the memory efficiency of Box.
- Test output now shows the size difference between stack-allocated and heap-allocated (Box) config objects.
2025-06-06 14:54:48 +08:00
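A minimal sketch of the boxing pattern and the size comparison this commit adds; the `IRuntime` stand-in is an assumption and far smaller than the real structs:

```rust
// A large payload stored inline bloats every enclosing struct, while a Box
// keeps only a pointer on the stack; size_of makes the difference visible.
struct IRuntime {
    config: [u8; 4096], // stand-in for a large parsed config
}

struct ConfigInline {
    runtime: IRuntime,
}

struct ConfigBoxed {
    runtime: Box<IRuntime>,
}

fn main() {
    println!(
        "inline: {} bytes, boxed: {} bytes",
        std::mem::size_of::<ConfigInline>(),
        std::mem::size_of::<ConfigBoxed>()
    );
    let boxed = ConfigBoxed {
        runtime: Box::new(IRuntime { config: [0; 4096] }),
    };
    println!("first byte: {}", boxed.runtime.config[0]);
}
```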
wonfen
72783e3ff5 feat: add global mutex to prevent concurrent config updates 2025-06-01 20:54:14 +08:00
wonfen
af1689ee07 refactor: add debounce to optimize config updates and provider refresh handling 2025-05-25 21:34:48 +08:00
wonfen
b70e202201 refactor: simplify frontend event dispatch by restructuring notification system 2025-05-18 20:58:59 +08:00
wonfen
ff5a2c6ca4 feat: add backend config change listener to update frontend state 2025-05-04 13:28:08 +08:00
wonfen
d6a79316a6 feat: toggle next auto-update time on subscription card click and show update result feedback 2025-04-25 17:17:34 +08:00
Tunglies
0b8d08d13b Update logging macros usage in timer module
Replace log macros with custom logging macros in timer module
2025-04-05 11:36:38 +08:00
wonfen
1bd503a654 feat: add error prompt for initial config loading to prevent switching to invalid subscription 2025-03-30 10:56:30 +08:00
Tunglies
9ebde802d4 Update alpha workflow to trigger on src directory changes 2025-03-29 12:35:49 +08:00