refactor: profile switch (#5197)

* refactor: proxy refresh

* fix(proxy-store): properly hydrate and filter backend provider snapshots

* fix(proxy-store): add monotonic fetch guard and event bridge cleanup

* fix(proxy-store): tweak fetch sequencing guard to prevent snapshot invalidation from wiping fast responses

* docs: UPDATELOG.md

* fix(proxy-snapshot, proxy-groups): restore last-selected proxy and group info

* fix(proxy): merge static and provider entries in snapshot; fix Virtuoso viewport height

* fix(proxy-groups): restrict reduced-height viewport to chain-mode column

* refactor(profiles): introduce a state machine

* refactor: replace state machine with reducer

* refactor: introduce a profile switch worker

* refactor: hook up a backend-driven profile switch flow

* refactor(profile-switch): serialize switches with async queue and enrich frontend events

* feat(profiles): centralize profile switching with reducer/driver queue to fix stuck UI on rapid toggles

* chore: translate comments and log messages to English to avoid encoding issues

* refactor: migrate backend queue to SwitchDriver actor
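
A minimal sketch of the actor shape this migration implies, assuming a tokio mpsc channel; `SwitchDriver` and `SwitchRequest` are named by the commits, but the channel layout and fields here are illustrative guesses (later commits refine the collapse policy to preserve distinct pending profiles):

    use tokio::sync::mpsc;

    // Hypothetical request shape; the real struct lives in state.rs.
    struct SwitchRequest {
        profile_id: String,
    }

    struct SwitchDriver {
        rx: mpsc::UnboundedReceiver<SwitchRequest>,
    }

    impl SwitchDriver {
        fn spawn() -> mpsc::UnboundedSender<SwitchRequest> {
            let (tx, rx) = mpsc::unbounded_channel();
            tokio::spawn(SwitchDriver { rx }.run());
            tx
        }

        async fn run(mut self) {
            while let Some(mut req) = self.rx.recv().await {
                // Collapse the backlog so only the newest intent survives.
                while let Ok(newer) = self.rx.try_recv() {
                    req = newer;
                }
                // One switch runs to completion before the next dequeue.
                perform_switch(&req.profile_id).await;
            }
        }
    }

    async fn perform_switch(profile_id: &str) {
        // Placeholder for the real workflow (validate, patch, reload core).
        let _ = profile_id;
    }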

* fix(profile): unify error string types in validation helper

* refactor(profile): make switch driver fully async and handle panics safely

* refactor(cmd): move switch-validation helper into new profile_switch module

* refactor(profile): modularize switch logic into profile_switch.rs

* refactor(profile_switch): modularize switch handler

- Break monolithic switch handler into proper module hierarchy
- Move shared globals, constants, and SwitchScope guard to state.rs
- Isolate queue orchestration and async task spawning in driver.rs
- Consolidate switch pipeline and config patching in workflow.rs
- Extract request pre-checks/YAML validation into validation.rs

* refactor(profile_switch): centralize state management and add cancellation flow

- Introduced SwitchManager in state.rs to unify mutex, sequencing, and SwitchScope handling.
- Added SwitchCancellation and SwitchRequest wrappers to encapsulate cancel tokens and notifications.
- Updated driver to allocate task IDs via SwitchManager, cancel old tokens, and queue next jobs in order.
- Updated workflow to check cancellation and sequence at each phase, replacing global flags with manager APIs.
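
A plausible shape for the cancellation wrapper described above, assuming tokio_util's CancellationToken plus a tokio Notify for acknowledgement (the "notify mechanism" a later commit adds); the field names are guesses:

    use std::sync::Arc;
    use tokio::sync::Notify;
    use tokio_util::sync::CancellationToken;

    // Illustrative shape; the real fields live in state.rs.
    #[derive(Clone)]
    struct SwitchCancellation {
        token: CancellationToken,
        acknowledged: Arc<Notify>,
    }

    impl SwitchCancellation {
        fn new() -> Self {
            Self {
                token: CancellationToken::new(),
                acknowledged: Arc::new(Notify::new()),
            }
        }

        /// Ask the in-flight switch to stop at its next checkpoint.
        fn cancel(&self) {
            self.token.cancel();
        }

        /// The cancelled workflow calls this once it has unwound.
        fn acknowledge(&self) {
            // notify_one stores a permit, so a waiter arriving after
            // this call still wakes immediately.
            self.acknowledged.notify_one();
        }

        /// Await acknowledgement instead of busy-polling a flag.
        async fn wait_acknowledged(&self) {
            self.acknowledged.notified().await;
        }
    }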

* feat(profile_switch): integrate explicit state machine for profile switching

- workflow.rs:24 now delegates each switch to SwitchStateMachine, passing an owned SwitchRequest.
  Queue cancellation and state-sequence checks are centralized inside the machine instead of scattered guards.
- workflow.rs:176 replaces the old helper with `SwitchStateMachine::new(manager(), None, profiles).run().await`,
  ensuring manual profile patches follow the same workflow (locking, validation, rollback) as queued switches.
- workflow.rs:180 & 275 expose `validate_profile_yaml` and `restore_previous_profile` for reuse inside the state machine.

- workflow/state_machine.rs:1 introduces a dedicated state machine module.
  It manages global mutex acquisition, request/cancellation state, YAML validation, draft patching,
  `CoreManager::update_config`, failure rollback, and tray/notification side-effects.
  Transitions check for cancellations and stale sequences; completions release guards via `SwitchScope` drop.

* refactor(profile-switch): integrate stage-aware panic handling

- src-tauri/src/cmd/profile_switch/workflow/state_machine.rs:1
  Defines SwitchStage and SwitchPanicInfo as crate-visible, wraps each transition in with_stage(...) with catch_unwind, and propagates CmdResult<bool> to distinguish validation failures from panics while keeping cancellation semantics.

- src-tauri/src/cmd/profile_switch/workflow.rs:25
  Updates run_switch_job to return Result<bool, SwitchPanicInfo>, routing timeout, validation, config, and stage panic cases separately. Reuses SwitchPanicInfo for logging/UI notifications; patch_profiles_config maps state-machine panics into user-facing error strings.

- src-tauri/src/cmd/profile_switch/driver.rs:1
  Adds SwitchJobOutcome to unify workflow results: normal completions carry bool, and panics propagate SwitchPanicInfo. The driver loop now logs panics explicitly and uses AssertUnwindSafe(...).catch_unwind() to guard setup-phase panics.
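
Assuming the stage guard uses the futures crate's catch_unwind, the transitions described above reduce to something like this sketch; the SwitchStage variants and SwitchPanicInfo fields are illustrative:

    use futures::FutureExt;
    use std::panic::AssertUnwindSafe;

    #[derive(Clone, Copy, Debug)]
    enum SwitchStage {
        Validate,
        PatchDraft,
        ApplyCore,
    }

    #[derive(Debug)]
    struct SwitchPanicInfo {
        stage: SwitchStage,
        message: String,
    }

    /// Run one transition, converting a panic into structured data
    /// instead of tearing down the driver task.
    async fn with_stage<F, T>(stage: SwitchStage, fut: F) -> Result<T, SwitchPanicInfo>
    where
        F: std::future::Future<Output = T>,
    {
        AssertUnwindSafe(fut).catch_unwind().await.map_err(|payload| {
            let message = payload
                .downcast_ref::<&str>()
                .map(|s| s.to_string())
                .or_else(|| payload.downcast_ref::<String>().cloned())
                .unwrap_or_else(|| "unknown panic".into());
            SwitchPanicInfo { stage, message }
        })
    }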

* refactor(profile-switch): add watchdog, heartbeat, and async timeout guards

- Introduce SwitchHeartbeat for stage tracking and timing; log stage transitions with elapsed durations.
- Add watchdog in driver to cancel stalled switches (5s heartbeat timeout).
- Wrap blocking ops (Config::apply, tray updates, profiles_save_file_safe, etc.) with time::timeout to prevent async stalls.
- Improve logs for stage transitions and watchdog timeouts to clarify cancellation points.
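
The heartbeat/watchdog pairing could look like this sketch: the workflow beats on every stage transition, and a watchdog task cancels the switch when beats stop. The 5 s stall budget comes from the commit; the 500 ms poll tick is an assumption:

    use std::sync::{Arc, Mutex};
    use std::time::{Duration, Instant};
    use tokio_util::sync::CancellationToken;

    #[derive(Clone)]
    struct SwitchHeartbeat(Arc<Mutex<Instant>>);

    impl SwitchHeartbeat {
        fn new() -> Self {
            Self(Arc::new(Mutex::new(Instant::now())))
        }
        // Called on each stage transition.
        fn beat(&self) {
            *self.0.lock().unwrap() = Instant::now();
        }
        fn elapsed(&self) -> Duration {
            self.0.lock().unwrap().elapsed()
        }
    }

    fn spawn_watchdog(heartbeat: SwitchHeartbeat, cancel: CancellationToken) {
        tokio::spawn(async move {
            let mut tick = tokio::time::interval(Duration::from_millis(500));
            loop {
                tick.tick().await;
                if cancel.is_cancelled() {
                    break; // switch finished or was cancelled elsewhere
                }
                if heartbeat.elapsed() > Duration::from_secs(5) {
                    log::warn!("switch stalled; cancelling via watchdog");
                    cancel.cancel();
                    break;
                }
            }
        });
    }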

* refactor(profile-switch): async post-switch tasks, early lock release, and spawn_blocking for IO

* feat(profile-switch): track cleanup and coordinate pipeline

- Add explicit cleanup tracking in the driver (`cleanup_profiles` map + `CleanupDone` messages) to know when background post-switch work is still running before starting a new workflow. (driver.rs:29-50)
- Update `handle_enqueue` to detect “cleanup in progress”: same-profile retries are short-circuited; other requests collapse the pending queue, cancelling old tokens so only the latest intent survives. (driver.rs:176-247)
- Rework scheduling helpers: `start_next_job` refuses to start while cleanup is outstanding; discarded requests release cancellation tokens; cleanup completion explicitly restarts the pipeline. (driver.rs:258-442)

* feat(profile-switch): unify post-switch cleanup handling

- workflow.rs (25-427) returns `SwitchWorkflowResult` (success + CleanupHandle) or `SwitchWorkflowError`.
  All failure/timeout paths stash post-switch work into a single CleanupHandle.
  Cleanup helpers (`notify_profile_switch_finished` and `close_connections_after_switch`) run inside that task for proper lifetime handling.

- driver.rs (29-439) propagates CleanupHandle through `SwitchJobOutcome`, spawns a bridge to wait for completion, and blocks `start_next_job` until done.
  Direct driver-side panics now schedule failure cleanup via the shared helper.
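
One plausible reading of CleanupHandle is a thin wrapper over the cleanup task's JoinHandle that the driver awaits before dequeuing the next switch; the wrapper and start_next_job body below are illustrative:

    use tokio::task::JoinHandle;

    // Hypothetical wrapper returned by the workflow.
    struct CleanupHandle(JoinHandle<()>);

    impl CleanupHandle {
        async fn wait(self) {
            if let Err(e) = self.0.await {
                log::warn!("post-switch cleanup task failed: {e}");
            }
        }
    }

    async fn start_next_job(pending_cleanup: Option<CleanupHandle>) {
        if let Some(cleanup) = pending_cleanup {
            // Block the pipeline, not the runtime: awaiting here keeps
            // ordering without holding any locks.
            cleanup.wait().await;
        }
        // ...dequeue and run the next switch...
    }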

* tmp

* Revert "tmp"

This reverts commit e582cf4a65.

* refactor: queue frontend events through async dispatcher

* refactor: queue frontend switch/proxy events and throttle notices

* chore: frontend debug log

* fix: re-enable only ProfileSwitchFinished events - keep others suppressed for crash isolation

- Re-enabled only ProfileSwitchFinished events; RefreshClash, RefreshProxy, and ProfileChanged remain suppressed (they log suppression messages)
- Allows frontend to receive task completion notifications for UI feedback while crash isolation continues
- src-tauri/src/core/handle.rs now only suppresses notify_profile_changed
- Serialized emitter, frontend logging bridge, and other diagnostics unchanged

* refactor: refreshClashData

* refactor(proxy): stabilize proxy switch pipeline and rendering

- Add coalescing buffer in notification.rs to emit only the latest proxies-updated snapshot
- Replace nextTick with queueMicrotask in asyncQueue.ts for same-frame hydration
- Hide auto-generated GLOBAL snapshot and preserve optional metadata in proxy-snapshot.ts
- Introduce stable proxy rendering state in AppDataProvider (proxyTargetProfileId, proxyDisplayProfileId, isProxyRefreshPending)
- Update proxy page to fade content during refresh and overlay status banner instead of showing incomplete snapshot

* refactor(profiles): move manual activating logic to reducer for deterministic queue tracking

* refactor: replace proxy-data event bridge with pure polling and simplify proxy store

- Replaced the proxy-data event bridge with pure polling: AppDataProvider now fetches the initial snapshot and drives refreshes from the polled switchStatus, removing verge://refresh-* listeners (src/providers/app-data-provider.tsx).
- Simplified proxy-store by dropping the proxies-updated listener queue and unused payload/normalizer helpers; relies on SWR/provider fetch path + calcuProxies for live updates (src/stores/proxy-store.ts).
- Trimmed layout-level event wiring to keep only notice/show/hide subscriptions, removing obsolete refresh listeners (src/pages/_layout/useLayoutEvents.ts).

* refactor(proxy): streamline proxies-updated handling and store event flow

- AppDataProvider now treats `proxies-updated` as the fast path: the listener
  calls `applyLiveProxyPayload` immediately and schedules only a single fallback
  `fetchLiveProxies` ~600 ms later (replacing the old 0/250/1000/2000 cascade).
  Expensive provider/rule refreshes run in parallel via `Promise.allSettled`, and
  the multi-stage queue on profile updates completion was removed
  (src/providers/app-data-provider.tsx).

- Rebuilt proxy-store to support the event flow: restored `setLive`, provider
  normalization, and an animation-frame + async queue that applies payloads without
  blocking. Exposed `applyLiveProxyPayload` so providers can push events directly
  into the store (src/stores/proxy-store.ts).

* refactor: switch delay

* refactor(app-data-provider): trigger getProfileSwitchStatus revalidation on profile-switch-finished

- AppDataProvider now listens to `profile-switch-finished` and calls `mutate("getProfileSwitchStatus")` to immediately update state and unlock buttons (src/providers/app-data-provider.tsx).
- Retain existing detailed timing logs for monitoring other stages.
- Frontend success notifications remain instant; background refreshes continue asynchronously.

* fix(profiles): prevent duplicate toast on page remount

* refactor(profile-switch): make active switches preemptible and prevent queue piling

- Add notify mechanism to SwitchCancellation to await cancellation without busy-waiting (state.rs:82)
- Collapse pending queue to a single entry in the driver; cancel in-flight task on newer request (driver.rs:232)
- Update handle_update_core to watch cancel token and 30s timeout; release locks, discard draft, and exit early if canceled (state_machine.rs:301)
- Providers revalidate status immediately on profile-switch-finished events (app-data-provider.tsx:208)

* refactor(core): make core reload phase controllable, reduce 0xcfffffff risk

- CoreManager::apply_config now calls `reload_config_with_retry`, each attempt waits up to 5s, retries 3 times; on failure, returns error with duration logged and triggers core restart if needed (src-tauri/src/core/manager/config.rs:175, 205)
- `reload_config_with_retry` logs attempt info on timeout or error; if error is a Mihomo connection issue, fallback to original restart logic (src-tauri/src/core/manager/config.rs:211)
- `reload_config_once` retains original Mihomo call for retry wrapper usage (src-tauri/src/core/manager/config.rs:247)
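
The retry wrapper reads roughly like this sketch (5 s per attempt, 3 attempts, per the commit; the error strings and fallback hook are illustrative):

    use std::time::Duration;

    async fn reload_config_with_retry() -> Result<(), String> {
        let mut last_err = String::new();
        for attempt in 1..=3 {
            match tokio::time::timeout(Duration::from_secs(5), reload_config_once()).await {
                Ok(Ok(())) => return Ok(()),
                Ok(Err(e)) => last_err = format!("attempt {attempt} failed: {e}"),
                Err(_) => last_err = format!("attempt {attempt} timed out after 5s"),
            }
            log::warn!("{last_err}");
        }
        // Caller decides whether to fall back to a core restart.
        Err(last_err)
    }

    async fn reload_config_once() -> Result<(), String> {
        // Placeholder for the original Mihomo reload call.
        Ok(())
    }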

* chore(frontend-logs): downgrade routine event logs from info to debug

- Logs like `emit_via_app entering spawn_blocking`, `Async emit…`, `Buffered proxies…` are now debug-level (src-tauri/src/core/notification.rs:155, :265, :309…)
- Genuine warnings/errors (failures/timeouts) remain at warn/error
- Core stage logs remain info to keep backend tracking visible

* refactor(frontend-emit): make emit_via_app fire-and-forget async

- `emit_via_app` now a regular function; spawns with `tokio::spawn` and logs a warn if `emit_to` fails, caller returns immediately (src-tauri/src/core/notification.rs:269)
- Removed `.await` at Async emit and flush_proxies calls; only record dispatch duration and warn on failure (src-tauri/src/core/notification.rs:211, :329)
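
In fire-and-forget form the emit path is just a spawn whose failure is logged, never awaited by the caller; a sketch with a placeholder for the real emit_to call:

    fn emit_via_app(event: &'static str, payload: serde_json::Value) {
        tokio::spawn(async move {
            if let Err(e) = try_emit(event, payload).await {
                log::warn!("emit_to failed for {event}: {e}");
            }
        });
    }

    async fn try_emit(_event: &str, _payload: serde_json::Value) -> Result<(), String> {
        // Placeholder for the real tauri Emitter::emit_to call.
        Ok(())
    }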

* refactor(ui): restructure profile switch for event-driven speed + polling stability

- Backend
  - SwitchManager maintains a lightweight event queue: added `event_sequence`, `recent_events`, and `SwitchResultEvent`; provides `push_event` / `events_after` (state.rs)
  - `handle_completion` pushes events on success/failure and keeps `last_result` (driver.rs) for frontend incremental fetch
  - New Tauri command `get_profile_switch_events(after_sequence)` exposes `events_after` (profile_switch/mod.rs → profile.rs → lib.rs)
- Notification system
  - `NotificationSystem::process_event` only logs debug, disables WebView `emit_to`, fixes 0xcfffffff
  - Related emit/buffer functions now safe no-op, removed unused structures and warnings (notification.rs)
- Frontend
  - services/cmds.ts defines `SwitchResultEvent` and `getProfileSwitchEvents`
  - `AppDataProvider` holds `switchEventSeqRef`, polls incremental events every 0.25s (busy) / 1s (idle); each event triggers:
      - immediate `globalMutate("getProfiles")` to refresh current profile
      - background refresh of proxies/providers/rules via `Promise.allSettled` (failures logged, non-blocking)
      - forced `mutateSwitchStatus` to correct state
  - original switchStatus effect calls `handleSwitchResult` as fallback; other toast/activation logic handled in profiles.tsx
- Commands / API cleanup
  - removed `pub use profile_switch::*;` in cmd::mod.rs to avoid conflicts; frontend uses new command polling
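
The sequence-numbered event queue amounts to a bounded ring plus an events_after filter; push_event/events_after match the commit, while the capacity and event fields are guesses:

    use std::collections::VecDeque;

    #[derive(Clone)]
    struct SwitchResultEvent {
        sequence: u64,
        profile_id: String,
        success: bool,
    }

    struct EventLog {
        next_sequence: u64,
        recent: VecDeque<SwitchResultEvent>,
    }

    impl EventLog {
        const CAPACITY: usize = 64;

        fn new() -> Self {
            Self { next_sequence: 0, recent: VecDeque::new() }
        }

        fn push_event(&mut self, profile_id: String, success: bool) {
            self.next_sequence += 1;
            if self.recent.len() == Self::CAPACITY {
                self.recent.pop_front(); // oldest events fall off the ring
            }
            self.recent.push_back(SwitchResultEvent {
                sequence: self.next_sequence,
                profile_id,
                success,
            });
        }

        /// Everything strictly newer than what the caller has seen.
        fn events_after(&self, after_sequence: u64) -> Vec<SwitchResultEvent> {
            self.recent
                .iter()
                .filter(|e| e.sequence > after_sequence)
                .cloned()
                .collect()
        }
    }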

* refactor(frontend): optimize profile switch with optimistic updates

* refactor(profile-switch): switch to event-driven flow with Profile Store

- SwitchManager pushes events; frontend polls get_profile_switch_events
- Zustand store handles optimistic profiles; AppDataProvider applies updates and background-fetches
- UI flicker removed

* fix(app-data): re-hook profile store updates during switch hydration

* fix(notification): restore frontend event dispatch and non-blocking emits

* fix(app-data-provider): restore proxy refresh and seed snapshot after refactor

* fix: ensure switch completion events are received and handle proxies-updated

* fix(app-data-provider): dedupe switch results by taskId and fix stale profile state

* fix(profile-switch): ensure patch_profiles_config_by_profile_index waits for real completion and handle join failures in apply_config_with_timeout

* docs: UPDATELOG.md

* chore: add necessary comments

* fix(core): always dispatch async proxy snapshot after RefreshClash event

* fix(proxy-store, provider): handle pending snapshots and proxy profiles

- Added pending snapshot tracking in proxy-store so `lastAppliedFetchId` no longer jumps on seed. Profile adoption is deferred until a qualifying fetch completes. Exposed `clearPendingProfile` for rollback support.
- Cleared pending snapshot state whenever live payloads apply or the store resets, preventing stale optimistic profile IDs after failures.
- In provider integration, subscribed to the pending proxy profile and fed it into target-profile derivation. Cleared it on failed switch results so hydration can advance and UI status remains accurate.

* fix(proxy): re-hook tray refresh events into proxy refresh queue

- Reattached listen("verge://refresh-proxy-config", …) at src/providers/app-data-provider.tsx:402 and registered it for cleanup.
- Added matching window fallback handler at src/providers/app-data-provider.tsx:430 so in-app dispatches share the same refresh path.

* fix(proxy-snapshot/proxy-groups): address review findings on snapshot placeholders

- src/utils/proxy-snapshot.ts:72-95 now derives snapshot group members solely from proxy-groups.proxies, so provider ids under `use` no longer generate placeholder proxy items.
- src/components/proxy/proxy-groups.tsx:665-677 lets the hydration overlay capture pointer events (and shows a wait cursor) so users can’t interact with snapshot-only placeholders before live data is ready.

* fix(profile-switch): preserve queued requests and avoid stale connection teardown

- Keep earlier queued switches intact by dropping the blanket “collapse” call: after removing duplicates for the same profile, new requests are simply appended, leaving other profiles pending (driver.rs:376). Resolves queue-loss scenario.
- Gate connection cleanup on real successes so cancelled/stale runs no longer tear down Mihomo connections; success handler now skips close_connections_after_switch when success == false (workflow.rs:419).

* fix(profile-switch, layout): improve profile validation and restore backend refresh

- Hardened profile validation using `tokio::fs` with a 5s timeout and offloading YAML parsing to `AsyncHandler::spawn_blocking`, preventing slow disks or malformed files from freezing the runtime (src-tauri/src/cmd/profile_switch/validation.rs:9, 71).
- Restored backend-triggered refresh handling by listening for `verge://refresh-clash-config` / `verge://refresh-verge-config` and invoking shared refresh services so SWR caches stay in sync with core events (src/pages/_layout/useLayoutEvents.ts:6, 45, 55).
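
The hardened validation path, sketched with tokio::task::spawn_blocking standing in for the project's AsyncHandler wrapper; the mechanics (5 s tokio::fs read timeout, YAML parsing off the async runtime) follow the commit, the error strings are illustrative:

    use std::path::Path;
    use std::time::Duration;

    async fn validate_profile_yaml(path: &Path) -> Result<(), String> {
        // Cap the disk read so a slow volume cannot stall the runtime.
        let content = tokio::time::timeout(
            Duration::from_secs(5),
            tokio::fs::read_to_string(path),
        )
        .await
        .map_err(|_| "profile read timed out (5s)".to_string())?
        .map_err(|e| format!("failed to read profile: {e}"))?;

        // Parse on a blocking thread; malformed files cannot block async tasks.
        tokio::task::spawn_blocking(move || {
            serde_yaml_ng::from_str::<serde_yaml_ng::Value>(&content)
                .map(|_| ())
                .map_err(|e| format!("YAML syntax error: {e}"))
        })
        .await
        .map_err(|e| format!("YAML parse task failed: {e}"))?
    }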

* feat(profile-switch): handle cancellations for superseded requests

- Added a `cancelled` flag and constructor so superseded requests publish an explicit cancellation instead of a failure (src-tauri/src/cmd/profile_switch/state.rs:249, src-tauri/src/cmd/profile_switch/driver.rs:482)
- Updated the profile switch effect to log cancellations as info, retain the shared `mutate` call, and skip emitting error toasts while still refreshing follow-up work (src/pages/profiles.tsx:554, src/pages/profiles.tsx:581)
- Exposed the new flag on the TypeScript contract to keep downstream consumers type-safe (src/services/cmds.ts:20)

* fix(profiles): wrap logging payload for Tauri frontend_log

* fix(profile-switch): add rollback and error propagation for failed persistence

- Added rollback on apply failure so Mihomo restores to the previous profile
  before exiting the success path early (state_machine.rs:474).
- Reworked persist_profiles_with_timeout to surface timeout/join/save errors,
  convert them into CmdResult failures, and trigger rollback + error propagation
  when persistence fails (state_machine.rs:703).

* fix(profile-switch): prevent mid-finalize reentrancy and lingering tasks

* fix(profile-switch): preserve pending queue and surface discarded switches

* fix(profile-switch): avoid draining Mihomo sockets on failed/cancelled switches

* fix(app-data-provider): restore backend-driven refresh and reattach fallbacks

* fix(profile-switch): queue concurrent updates and add bounded wait/backoff

* fix(proxy): trigger live refresh on app start for proxy snapshot

* refactor(profile-switch): split flow into layers and centralize async cleanup

- Introduced `SwitchDriver` to encapsulate queue and driver logic while keeping the public Tauri command API.
- Added workflow/cleanup helpers for notification dispatch and Mihomo connection draining, re-exported for API consistency.
- Replaced monolithic state machine with `core.rs`, `context.rs`, and `stages.rs`, plus a thin `mod.rs` re-export layer; stage methods are now individually testable.
- Removed legacy `workflow/state_machine.rs` and adjusted visibility on re-exported types/constants to ensure compilation.
Author: Sline
Date: 2025-10-30 17:29:15 +08:00
Committed by: GitHub
Parent: af79bcd1cf
Commit: c2dcd86722
36 changed files with 5912 additions and 1275 deletions

File: src-tauri/src/cmd/profile.rs

@@ -1,5 +1,4 @@
use super::CmdResult;
use super::StringifyErr;
use super::{CmdResult, StringifyErr, profile_switch};
use crate::{
config::{
Config, IProfiles, PrfItem, PrfOption,
@@ -9,77 +8,191 @@ use crate::{
},
profiles_append_item_safe,
},
core::{CoreManager, handle, timer::Timer, tray::Tray},
feat, logging,
process::AsyncHandler,
ret_err,
core::{CoreManager, handle, timer::Timer},
feat, logging, ret_err,
utils::{dirs, help, logging::Type},
};
use once_cell::sync::Lazy;
use parking_lot::RwLock;
use smartstring::alias::String;
use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};
use std::time::Duration;
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::{Duration, Instant, SystemTime, UNIX_EPOCH};
// Global request sequence tracking, used to avoid queued execution
static CURRENT_REQUEST_SEQUENCE: AtomicU64 = AtomicU64::new(0);
use crate::cmd::profile_switch::{ProfileSwitchStatus, SwitchResultEvent};
static CURRENT_SWITCHING_PROFILE: AtomicBool = AtomicBool::new(false);
#[tauri::command]
pub async fn get_profiles() -> CmdResult<IProfiles> {
// Strategy 1: try a fast fetch of the latest data
let latest_result = tokio::time::timeout(Duration::from_millis(500), async {
let profiles = Config::profiles().await;
let latest = profiles.latest_ref();
IProfiles {
current: latest.current.clone(),
items: latest.items.clone(),
}
})
.await;
match latest_result {
Ok(profiles) => {
logging!(info, Type::Cmd, "快速获取配置列表成功");
return Ok(profiles);
}
Err(_) => {
logging!(warn, Type::Cmd, "快速获取配置超时(500ms)");
}
}
// Strategy 2: if the fast fetch failed, try data()
let data_result = tokio::time::timeout(Duration::from_secs(2), async {
let profiles = Config::profiles().await;
let data = profiles.latest_ref();
IProfiles {
current: data.current.clone(),
items: data.items.clone(),
}
})
.await;
match data_result {
Ok(profiles) => {
logging!(info, Type::Cmd, "获取draft配置列表成功");
return Ok(profiles);
}
Err(join_err) => {
logging!(
error,
Type::Cmd,
"获取draft配置任务失败或超时: {}",
join_err
);
}
}
// Strategy 3: fallback, try recreating the configuration
logging!(warn, Type::Cmd, "All profile fetch strategies failed; attempting fallback");
Ok(IProfiles::new().await)
#[derive(Clone)]
struct CachedProfiles {
snapshot: IProfiles,
captured_at: Instant,
}
/// Enhance profiles
static PROFILES_CACHE: Lazy<RwLock<Option<CachedProfiles>>> = Lazy::new(|| RwLock::new(None));
#[derive(Default)]
struct SnapshotMetrics {
fast_hits: AtomicU64,
cache_hits: AtomicU64,
blocking_hits: AtomicU64,
refresh_scheduled: AtomicU64,
last_log_ms: AtomicU64,
}
static SNAPSHOT_METRICS: Lazy<SnapshotMetrics> = Lazy::new(SnapshotMetrics::default);
/// Store the latest snapshot so cache consumers can reuse it without hitting the lock again.
fn update_profiles_cache(snapshot: &IProfiles) {
*PROFILES_CACHE.write() = Some(CachedProfiles {
snapshot: snapshot.clone(),
captured_at: Instant::now(),
});
}
/// Return the cached snapshot and how old it is, if present.
fn cached_profiles_snapshot() -> Option<(IProfiles, u128)> {
PROFILES_CACHE.read().as_ref().map(|entry| {
(
entry.snapshot.clone(),
entry.captured_at.elapsed().as_millis(),
)
})
}
/// Return the latest profiles snapshot, preferring cached data so UI requests never block.
#[tauri::command]
pub async fn get_profiles() -> CmdResult<IProfiles> {
let started_at = Instant::now();
// Resolve snapshots in three tiers so UI reads never stall on a mutex:
// 1) try a non-blocking read, 2) fall back to the last cached copy while a
// writer holds the lock, 3) block and refresh the cache as a final resort.
if let Some(snapshot) = read_profiles_snapshot_nonblocking().await {
let item_count = snapshot
.items
.as_ref()
.map(|items| items.len())
.unwrap_or(0);
update_profiles_cache(&snapshot);
SNAPSHOT_METRICS.fast_hits.fetch_add(1, Ordering::Relaxed);
logging!(
debug,
Type::Cmd,
"[Profiles] Snapshot served (path=fast, items={}, elapsed={}ms)",
item_count,
started_at.elapsed().as_millis()
);
maybe_log_snapshot_metrics();
return Ok(snapshot);
}
if let Some((cached, age_ms)) = cached_profiles_snapshot() {
SNAPSHOT_METRICS.cache_hits.fetch_add(1, Ordering::Relaxed);
logging!(
debug,
Type::Cmd,
"[Profiles] Served cached snapshot while lock busy (age={}ms)",
age_ms
);
schedule_profiles_snapshot_refresh();
maybe_log_snapshot_metrics();
return Ok(cached);
}
let snapshot = read_profiles_snapshot_blocking().await;
let item_count = snapshot
.items
.as_ref()
.map(|items| items.len())
.unwrap_or(0);
update_profiles_cache(&snapshot);
SNAPSHOT_METRICS
.blocking_hits
.fetch_add(1, Ordering::Relaxed);
logging!(
debug,
Type::Cmd,
"[Profiles] Snapshot served (path=blocking, items={}, elapsed={}ms)",
item_count,
started_at.elapsed().as_millis()
);
maybe_log_snapshot_metrics();
Ok(snapshot)
}
/// Try to grab the latest profile data without waiting for the writer.
async fn read_profiles_snapshot_nonblocking() -> Option<IProfiles> {
let profiles = Config::profiles().await;
profiles.try_latest_ref().map(|guard| (**guard).clone())
}
/// Fall back to a blocking read when we absolutely must have fresh data.
async fn read_profiles_snapshot_blocking() -> IProfiles {
let profiles = Config::profiles().await;
let guard = profiles.latest_ref();
(**guard).clone()
}
/// Schedule a background cache refresh once the exclusive lock becomes available again.
fn schedule_profiles_snapshot_refresh() {
crate::process::AsyncHandler::spawn(|| async {
// Once the lock is released we refresh the cached snapshot so the next
// request observes the latest data instead of the stale fallback.
SNAPSHOT_METRICS
.refresh_scheduled
.fetch_add(1, Ordering::Relaxed);
let snapshot = read_profiles_snapshot_blocking().await;
update_profiles_cache(&snapshot);
logging!(
debug,
Type::Cmd,
"[Profiles] Cache refreshed after busy snapshot"
);
});
}
fn maybe_log_snapshot_metrics() {
const LOG_INTERVAL_MS: u64 = 5_000;
let now_ms = current_millis();
let last_ms = SNAPSHOT_METRICS.last_log_ms.load(Ordering::Relaxed);
if now_ms.saturating_sub(last_ms) < LOG_INTERVAL_MS {
return;
}
if SNAPSHOT_METRICS
.last_log_ms
.compare_exchange(last_ms, now_ms, Ordering::SeqCst, Ordering::Relaxed)
.is_err()
{
return;
}
let fast = SNAPSHOT_METRICS.fast_hits.swap(0, Ordering::SeqCst);
let cache = SNAPSHOT_METRICS.cache_hits.swap(0, Ordering::SeqCst);
let blocking = SNAPSHOT_METRICS.blocking_hits.swap(0, Ordering::SeqCst);
let refresh = SNAPSHOT_METRICS.refresh_scheduled.swap(0, Ordering::SeqCst);
if fast == 0 && cache == 0 && blocking == 0 && refresh == 0 {
return;
}
logging!(
debug,
Type::Cmd,
"[Profiles][Metrics] 5s window => fast={}, cache={}, blocking={}, refresh_jobs={}",
fast,
cache,
blocking,
refresh
);
}
fn current_millis() -> u64 {
SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap_or(Duration::ZERO)
.as_millis() as u64
}
/// Run the optional enhancement pipeline and refresh Clash when it completes.
#[tauri::command]
pub async fn enhance_profiles() -> CmdResult {
match feat::enhance_profiles().await {
@@ -93,79 +206,106 @@ pub async fn enhance_profiles() -> CmdResult {
Ok(())
}
/// Import a profile
/// Download a profile from the given URL and persist it to the local catalog.
#[tauri::command]
pub async fn import_profile(url: std::string::String, option: Option<PrfOption>) -> CmdResult {
logging!(info, Type::Cmd, "[导入订阅] 开始导入: {}", url);
logging!(info, Type::Cmd, "[Profile Import] Begin: {}", url);
// 直接依赖 PrfItem::from_url 自身的超时/重试逻辑,不再使用 tokio::time::timeout 包裹
// Rely on PrfItem::from_url internal timeout/retry logic instead of wrapping with tokio::time::timeout
let item = match PrfItem::from_url(&url, None, None, option).await {
Ok(it) => {
logging!(info, Type::Cmd, "[导入订阅] 下载完成,开始保存配置");
logging!(
info,
Type::Cmd,
"[Profile Import] Download complete; saving configuration"
);
it
}
Err(e) => {
logging!(error, Type::Cmd, "[导入订阅] 下载失败: {}", e);
return Err(format!("导入订阅失败: {}", e).into());
logging!(error, Type::Cmd, "[Profile Import] Download failed: {}", e);
return Err(format!("Profile import failed: {}", e).into());
}
};
match profiles_append_item_safe(item.clone()).await {
Ok(_) => match profiles_save_file_safe().await {
Ok(_) => {
logging!(info, Type::Cmd, "[导入订阅] 配置文件保存成功");
logging!(
info,
Type::Cmd,
"[Profile Import] Configuration file saved successfully"
);
}
Err(e) => {
logging!(error, Type::Cmd, "[导入订阅] 保存配置文件失败: {}", e);
logging!(
error,
Type::Cmd,
"[Profile Import] Failed to save configuration file: {}",
e
);
}
},
Err(e) => {
logging!(error, Type::Cmd, "[导入订阅] 保存配置失败: {}", e);
return Err(format!("导入订阅失败: {}", e).into());
logging!(
error,
Type::Cmd,
"[Profile Import] Failed to persist configuration: {}",
e
);
return Err(format!("Profile import failed: {}", e).into());
}
}
// Immediately send a configuration change notification
// Immediately emit a configuration change notification
if let Some(uid) = &item.uid {
logging!(info, Type::Cmd, "[导入订阅] 发送配置变更通知: {}", uid);
logging!(
info,
Type::Cmd,
"[Profile Import] Emitting configuration change event: {}",
uid
);
handle::Handle::notify_profile_changed(uid.clone());
}
// Save the configuration file asynchronously and send a global notification
// Save configuration asynchronously and emit a global notification
let uid_clone = item.uid.clone();
if let Some(uid) = uid_clone {
// Delay sending to ensure the file has been fully written
// Delay notification to ensure the file is fully written
tokio::time::sleep(Duration::from_millis(100)).await;
handle::Handle::notify_profile_changed(uid);
}
logging!(info, Type::Cmd, "[导入订阅] 导入完成: {}", url);
logging!(info, Type::Cmd, "[Profile Import] Completed: {}", url);
Ok(())
}
/// Adjust the order of profiles
/// Move a profile in the list relative to another entry.
#[tauri::command]
pub async fn reorder_profile(active_id: String, over_id: String) -> CmdResult {
match profiles_reorder_safe(active_id, over_id).await {
Ok(_) => {
log::info!(target: "app", "重新排序配置文件");
log::info!(target: "app", "Reordered profiles");
Ok(())
}
Err(err) => {
log::error!(target: "app", "重新排序配置文件失败: {}", err);
Err(format!("重新排序配置文件失败: {}", err).into())
log::error!(target: "app", "Failed to reorder profiles: {}", err);
Err(format!("Failed to reorder profiles: {}", err).into())
}
}
}
/// Create a new profile
/// Create a new configuration file
/// Create a new profile entry and optionally write its backing file.
#[tauri::command]
pub async fn create_profile(item: PrfItem, file_data: Option<String>) -> CmdResult {
match profiles_append_item_with_filedata_safe(item.clone(), file_data).await {
Ok(_) => {
// Send a configuration change notification
// Emit configuration change notification
if let Some(uid) = &item.uid {
logging!(info, Type::Cmd, "[创建订阅] 发送配置变更通知: {}", uid);
logging!(
info,
Type::Cmd,
"[Profile Create] Emitting configuration change event: {}",
uid
);
handle::Handle::notify_profile_changed(uid.clone());
}
Ok(())
@@ -177,7 +317,7 @@ pub async fn create_profile(item: PrfItem, file_data: Option<String>) -> CmdResu
}
}
/// Update a profile
/// Force-refresh a profile from its remote source, if available.
#[tauri::command]
pub async fn update_profile(index: String, option: Option<PrfOption>) -> CmdResult {
match feat::update_profile(index, option, Some(true)).await {
@@ -189,11 +329,11 @@ pub async fn update_profile(index: String, option: Option<PrfOption>) -> CmdResu
}
}
/// Delete a profile
/// Remove a profile and refresh the running configuration if necessary.
#[tauri::command]
pub async fn delete_profile(index: String) -> CmdResult {
println!("delete_profile: {}", index);
// Use the Send-safe helper function
// Use send-safe helper function
let should_update = profiles_delete_item_safe(index.clone())
.await
.stringify_err()?;
@@ -203,8 +343,13 @@ pub async fn delete_profile(index: String) -> CmdResult {
match CoreManager::global().update_config().await {
Ok(_) => {
handle::Handle::refresh_clash();
// Send a configuration change notification
logging!(info, Type::Cmd, "[Subscription Delete] Sending configuration change notification: {}", index);
// Emit configuration change notification
logging!(
info,
Type::Cmd,
"[Profile Delete] Emitting configuration change event: {}",
index
);
handle::Handle::notify_profile_changed(index);
}
Err(e) => {
@@ -216,361 +361,28 @@ pub async fn delete_profile(index: String) -> CmdResult {
Ok(())
}
/// Validate the syntax of the new profile
async fn validate_new_profile(new_profile: &String) -> Result<(), ()> {
logging!(info, Type::Cmd, "正在切换到新配置: {}", new_profile);
// 获取目标配置文件路径
let config_file_result = {
let profiles_config = Config::profiles().await;
let profiles_data = profiles_config.latest_ref();
match profiles_data.get_item(new_profile) {
Ok(item) => {
if let Some(file) = &item.file {
let path = dirs::app_profiles_dir().map(|dir| dir.join(file.as_str()));
path.ok()
} else {
None
}
}
Err(e) => {
logging!(error, Type::Cmd, "获取目标配置信息失败: {}", e);
None
}
}
};
// If a file path was resolved, check the YAML syntax
if let Some(file_path) = config_file_result {
if !file_path.exists() {
logging!(
error,
Type::Cmd,
"目标配置文件不存在: {}",
file_path.display()
);
handle::Handle::notice_message(
"config_validate::file_not_found",
format!("{}", file_path.display()),
);
return Err(());
}
// Timeout protection
let file_read_result = tokio::time::timeout(
Duration::from_secs(5),
tokio::fs::read_to_string(&file_path),
)
.await;
match file_read_result {
Ok(Ok(content)) => {
let yaml_parse_result = AsyncHandler::spawn_blocking(move || {
serde_yaml_ng::from_str::<serde_yaml_ng::Value>(&content)
})
.await;
match yaml_parse_result {
Ok(Ok(_)) => {
logging!(info, Type::Cmd, "目标配置文件语法正确");
Ok(())
}
Ok(Err(err)) => {
let error_msg = format!(" {err}");
logging!(
error,
Type::Cmd,
"目标配置文件存在YAML语法错误:{}",
error_msg
);
handle::Handle::notice_message(
"config_validate::yaml_syntax_error",
error_msg.clone(),
);
Err(())
}
Err(join_err) => {
let error_msg = format!("YAML解析任务失败: {join_err}");
logging!(error, Type::Cmd, "{}", error_msg);
handle::Handle::notice_message(
"config_validate::yaml_parse_error",
error_msg.clone(),
);
Err(())
}
}
}
Ok(Err(err)) => {
let error_msg = format!("无法读取目标配置文件: {err}");
logging!(error, Type::Cmd, "{}", error_msg);
handle::Handle::notice_message(
"config_validate::file_read_error",
error_msg.clone(),
);
Err(())
}
Err(_) => {
let error_msg = "读取配置文件超时(5秒)".to_string();
logging!(error, Type::Cmd, "{}", error_msg);
handle::Handle::notice_message(
"config_validate::file_read_timeout",
error_msg.clone(),
);
Err(())
}
}
} else {
Ok(())
}
}
/// Perform the configuration update and handle the result
async fn restore_previous_profile(prev_profile: String) -> CmdResult<()> {
logging!(info, Type::Cmd, "尝试恢复到之前的配置: {}", prev_profile);
let restore_profiles = IProfiles {
current: Some(prev_profile),
items: None,
};
Config::profiles()
.await
.draft_mut()
.patch_config(restore_profiles)
.stringify_err()?;
Config::profiles().await.apply();
crate::process::AsyncHandler::spawn(|| async move {
if let Err(e) = profiles_save_file_safe().await {
log::warn!(target: "app", "异步保存恢复配置文件失败: {e}");
}
});
logging!(info, Type::Cmd, "成功恢复到之前的配置");
Ok(())
}
async fn handle_success(current_sequence: u64, current_value: Option<String>) -> CmdResult<bool> {
let latest_sequence = CURRENT_REQUEST_SEQUENCE.load(Ordering::SeqCst);
if current_sequence < latest_sequence {
logging!(
info,
Type::Cmd,
"内核操作后发现更新的请求 (序列号: {} < {}),忽略当前结果",
current_sequence,
latest_sequence
);
Config::profiles().await.discard();
return Ok(false);
}
logging!(
info,
Type::Cmd,
"配置更新成功,序列号: {}",
current_sequence
);
Config::profiles().await.apply();
handle::Handle::refresh_clash();
if let Err(e) = Tray::global().update_tooltip().await {
log::warn!(target: "app", "Async tray tooltip update failed: {e}");
}
if let Err(e) = Tray::global().update_menu().await {
log::warn!(target: "app", "Async tray menu update failed: {e}");
}
if let Err(e) = profiles_save_file_safe().await {
log::warn!(target: "app", "Async profile file save failed: {e}");
}
if let Some(current) = &current_value {
logging!(
info,
Type::Cmd,
"向前端发送配置变更事件: {}, 序列号: {}",
current,
current_sequence
);
handle::Handle::notify_profile_changed(current.clone());
}
CURRENT_SWITCHING_PROFILE.store(false, Ordering::SeqCst);
Ok(true)
}
async fn handle_validation_failure(
error_msg: String,
current_profile: Option<String>,
) -> CmdResult<bool> {
logging!(warn, Type::Cmd, "配置验证失败: {}", error_msg);
Config::profiles().await.discard();
if let Some(prev_profile) = current_profile {
restore_previous_profile(prev_profile).await?;
}
handle::Handle::notice_message("config_validate::error", error_msg);
CURRENT_SWITCHING_PROFILE.store(false, Ordering::SeqCst);
Ok(false)
}
async fn handle_update_error<E: std::fmt::Display>(e: E, current_sequence: u64) -> CmdResult<bool> {
logging!(
warn,
Type::Cmd,
"更新过程发生错误: {}, 序列号: {}",
e,
current_sequence
);
Config::profiles().await.discard();
handle::Handle::notice_message("config_validate::boot_error", e.to_string());
CURRENT_SWITCHING_PROFILE.store(false, Ordering::SeqCst);
Ok(false)
}
async fn handle_timeout(current_profile: Option<String>, current_sequence: u64) -> CmdResult<bool> {
let timeout_msg = "Configuration update timed out (30s); validation or core communication may be blocked";
logging!(
error,
Type::Cmd,
"{}, 序列号: {}",
timeout_msg,
current_sequence
);
Config::profiles().await.discard();
if let Some(prev_profile) = current_profile {
restore_previous_profile(prev_profile).await?;
}
handle::Handle::notice_message("config_validate::timeout", timeout_msg);
CURRENT_SWITCHING_PROFILE.store(false, Ordering::SeqCst);
Ok(false)
}
async fn perform_config_update(
current_sequence: u64,
current_value: Option<String>,
current_profile: Option<String>,
) -> CmdResult<bool> {
logging!(
info,
Type::Cmd,
"开始内核配置更新,序列号: {}",
current_sequence
);
let update_result = tokio::time::timeout(
Duration::from_secs(30),
CoreManager::global().update_config(),
)
.await;
match update_result {
Ok(Ok((true, _))) => handle_success(current_sequence, current_value).await,
Ok(Ok((false, error_msg))) => handle_validation_failure(error_msg, current_profile).await,
Ok(Err(e)) => handle_update_error(e, current_sequence).await,
Err(_) => handle_timeout(current_profile, current_sequence).await,
}
}
/// Modify the profiles configuration
/// Apply partial profile list updates through the switching workflow.
#[tauri::command]
pub async fn patch_profiles_config(profiles: IProfiles) -> CmdResult<bool> {
if CURRENT_SWITCHING_PROFILE.load(Ordering::SeqCst) {
logging!(info, Type::Cmd, "当前正在切换配置,放弃请求");
return Ok(false);
}
CURRENT_SWITCHING_PROFILE.store(true, Ordering::SeqCst);
// Assign a sequence number to the current request
let current_sequence = CURRENT_REQUEST_SEQUENCE.fetch_add(1, Ordering::SeqCst) + 1;
let target_profile = profiles.current.clone();
logging!(
info,
Type::Cmd,
"开始修改配置文件,请求序列号: {}, 目标profile: {:?}",
current_sequence,
target_profile
);
let latest_sequence = CURRENT_REQUEST_SEQUENCE.load(Ordering::SeqCst);
if current_sequence < latest_sequence {
logging!(
info,
Type::Cmd,
"获取锁后发现更新的请求 (序列号: {} < {}),放弃当前请求",
current_sequence,
latest_sequence
);
return Ok(false);
}
// Save the current profile so it can be restored if validation fails
let current_profile = Config::profiles().await.latest_ref().current.clone();
logging!(info, Type::Cmd, "Current profile: {:?}", current_profile);
// Before switching, first check the target profile file for syntax errors
if let Some(new_profile) = profiles.current.as_ref()
&& current_profile.as_ref() != Some(new_profile)
&& validate_new_profile(new_profile).await.is_err()
{
CURRENT_SWITCHING_PROFILE.store(false, Ordering::SeqCst);
return Ok(false);
}
// Check request validity
let latest_sequence = CURRENT_REQUEST_SEQUENCE.load(Ordering::SeqCst);
if current_sequence < latest_sequence {
logging!(
info,
Type::Cmd,
"在核心操作前发现更新的请求 (序列号: {} < {}),放弃当前请求",
current_sequence,
latest_sequence
);
return Ok(false);
}
// Update the profiles configuration
logging!(
info,
Type::Cmd,
"正在更新配置草稿,序列号: {}",
current_sequence
);
let current_value = profiles.current.clone();
let _ = Config::profiles().await.draft_mut().patch_config(profiles);
// Re-validate request freshness before calling the core
let latest_sequence = CURRENT_REQUEST_SEQUENCE.load(Ordering::SeqCst);
if current_sequence < latest_sequence {
logging!(
info,
Type::Cmd,
"在内核交互前发现更新的请求 (序列号: {} < {}),放弃当前请求",
current_sequence,
latest_sequence
);
Config::profiles().await.discard();
return Ok(false);
}
perform_config_update(current_sequence, current_value, current_profile).await
profile_switch::patch_profiles_config(profiles).await
}
/// Modify profiles by profile name
/// Switch to the provided profile index and wait for completion before returning.
#[tauri::command]
pub async fn patch_profiles_config_by_profile_index(profile_index: String) -> CmdResult<bool> {
logging!(info, Type::Cmd, "切换配置到: {}", profile_index);
let profiles = IProfiles {
current: Some(profile_index),
items: None,
};
patch_profiles_config(profiles).await
profile_switch::patch_profiles_config_by_profile_index(profile_index).await
}
/// Modify a specific profile item
/// Enqueue a profile switch request and optionally notify on success.
#[tauri::command]
pub async fn switch_profile(profile_index: String, notify_success: bool) -> CmdResult<bool> {
profile_switch::switch_profile(profile_index, notify_success).await
}
/// Update a specific profile item and refresh timers if its schedule changed.
#[tauri::command]
pub async fn patch_profile(index: String, profile: PrfItem) -> CmdResult {
// Before saving, check whether update_interval changed
// Check for update_interval changes before saving
let profiles = Config::profiles().await;
let should_refresh_timer = if let Ok(old_profile) = profiles.latest_ref().get_item(&index) {
let old_interval = old_profile.option.as_ref().and_then(|o| o.update_interval);
@@ -589,15 +401,19 @@ pub async fn patch_profile(index: String, profile: PrfItem) -> CmdResult {
.await
.stringify_err()?;
// If the update interval or auto-update flag changed, refresh the timer asynchronously
// If the interval or auto-update flag changes, refresh the timer asynchronously
if should_refresh_timer {
let index_clone = index.clone();
crate::process::AsyncHandler::spawn(move || async move {
logging!(info, Type::Timer, "定时器更新间隔已变更,正在刷新定时器...");
logging!(
info,
Type::Timer,
"Timer interval changed; refreshing timer..."
);
if let Err(e) = crate::core::Timer::global().refresh().await {
logging!(error, Type::Timer, "刷新定时器失败: {}", e);
logging!(error, Type::Timer, "Failed to refresh timer: {}", e);
} else {
// After a successful refresh, send a custom event without triggering a config reload
// After refreshing successfully, emit a custom event without triggering a reload
crate::core::handle::Handle::notify_timer_updated(index_clone);
}
});
@@ -606,7 +422,7 @@ pub async fn patch_profile(index: String, profile: PrfItem) -> CmdResult {
Ok(())
}
/// View the profile file
/// Open the profile file in the system viewer.
#[tauri::command]
pub async fn view_profile(index: String) -> CmdResult {
let profiles = Config::profiles().await;
@@ -628,7 +444,7 @@ pub async fn view_profile(index: String) -> CmdResult {
help::open_file(path).stringify_err()
}
/// Read the contents of the profile file
/// Return the raw YAML contents for the given profile file.
#[tauri::command]
pub async fn read_profile_file(index: String) -> CmdResult<String> {
let profiles = Config::profiles().await;
@@ -638,10 +454,22 @@ pub async fn read_profile_file(index: String) -> CmdResult<String> {
Ok(data)
}
/// Get the next update time
/// Report the scheduled refresh timestamp (if any) for the profile timer.
#[tauri::command]
pub async fn get_next_update_time(uid: String) -> CmdResult<Option<i64>> {
let timer = Timer::global();
let next_time = timer.get_next_update_time(&uid).await;
Ok(next_time)
}
/// Return the latest driver snapshot describing active and queued switch tasks.
#[tauri::command]
pub async fn get_profile_switch_status() -> CmdResult<ProfileSwitchStatus> {
profile_switch::get_switch_status()
}
/// Fetch switch result events newer than the provided sequence number.
#[tauri::command]
pub async fn get_profile_switch_events(after_sequence: u64) -> CmdResult<Vec<SwitchResultEvent>> {
profile_switch::get_switch_events(after_sequence)
}