Mirror of https://github.com/clash-verge-rev/clash-verge-rev.git, synced 2026-01-29 08:45:41 +08:00
feat: migrate mihomo to use kode-bridge IPC on Windows and Unix (#4051)
* Refactor Mihomo API integration and remove crate_mihomo_api
- Removed the `mihomo_api` crate and its dependencies from the project.
- Introduced `IpcManager` for handling IPC communication with Mihomo.
- Implemented IPC methods for managing proxies, connections, and configurations.
- Updated `MihomoManager` to utilize `IpcManager` instead of the removed crate.
- Added platform-specific IPC socket path handling for macOS, Linux, and Windows.
- Cleaned up related tests and configuration files.
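The platform-specific endpoint handling above can be sketched with conditional compilation. This is a minimal illustration, not the project's actual code; the concrete paths below are assumptions (a named pipe on Windows, a domain socket path on Unix-like systems).

```rust
// Sketch of platform-specific IPC endpoint selection.
// The concrete paths are illustrative placeholders.

#[cfg(windows)]
fn ipc_endpoint() -> String {
    // Windows IPC uses a named pipe under the \\.\pipe\ namespace.
    r"\\.\pipe\mihomo".to_string()
}

#[cfg(unix)]
fn ipc_endpoint() -> String {
    // macOS and Linux use a Unix domain socket in a writable directory.
    "/tmp/mihomo.sock".to_string()
}

fn main() {
    println!("IPC endpoint: {}", ipc_endpoint());
}
```

Callers never branch on the platform themselves; they ask for the endpoint and hand it to the IPC client.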
* fix: remove duplicate permission entry in desktop capabilities
* refactor: replace MihomoManager with IpcManager and remove Mihomo module
* fix: restore tempfile dependency in dev-dependencies
* fix: update kode-bridge dependency to use git source from the dev branch
* feat: migrate mihomo to use kode-bridge IPC on Windows
This commit implements a comprehensive migration from legacy service IPC to the kode-bridge library for Windows IPC communication. Key changes include:
- Replace service_ipc with kode-bridge IpcManager for all mihomo communications
- Simplify proxy commands using the new caching mechanism with ProxyRequestCache
- Add Windows named pipe (`\\.\pipe\mihomo`) and Unix socket IPC endpoint configuration
- Update Tauri permissions and dependencies (dashmap, tauri-plugin-notification)
- Add IPC logging support and improve error handling
- Fix Windows IPC path handling in directory utilities
This migration enables better cross-platform IPC support and improved performance for mihomo proxy core communication.
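The ProxyRequestCache mentioned above is not shown in this diff; as a rough sketch of the idea, a TTL-keyed response cache avoids re-issuing identical IPC requests in quick succession. The structure below is an assumption built on the standard library (the project pairs its cache with dashmap for concurrent access); key and value types are illustrative.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Hypothetical sketch of a TTL request cache: entries older than
// the TTL are treated as misses and refetched by the caller.
struct ProxyRequestCache {
    ttl: Duration,
    entries: HashMap<String, (Instant, String)>,
}

impl ProxyRequestCache {
    fn new(ttl: Duration) -> Self {
        Self { ttl, entries: HashMap::new() }
    }

    fn get(&self, key: &str) -> Option<&String> {
        self.entries
            .get(key)
            .filter(|(stored_at, _)| stored_at.elapsed() < self.ttl)
            .map(|(_, value)| value)
    }

    fn insert(&mut self, key: String, value: String) {
        self.entries.insert(key, (Instant::now(), value));
    }
}

fn main() {
    let mut cache = ProxyRequestCache::new(Duration::from_secs(5));
    cache.insert("/proxies".into(), "{\"proxies\":{}}".into());
    assert!(cache.get("/proxies").is_some()); // fresh hit
    assert!(cache.get("/rules").is_none());   // never cached
}
```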
* doc: add IPC communication with Mihomo kernel, removing RESTful API dependency
* fix: standardize logging type naming from IPC to Ipc for consistency
* refactor: clean up and optimize code structure across multiple components and services
- Removed unnecessary comments and whitespace in various files.
- Improved code readability and maintainability by restructuring functions and components.
- Updated localization files for consistency and accuracy.
- Enhanced performance by optimizing hooks and utility functions.
- General code cleanup in settings, pages, and services to adhere to best practices.
* fix: simplify URL formatting in test_proxy_delay method
* fix: update kode-bridge dependency to version 0.1.3 and change source to crates.io
* fix: update macOS target versions in development workflow
* Revert "fix: update macOS target versions in development workflow"
This reverts commit b9831357e4.
* feat: enhance IPC path handling for Unix systems and improve directory safety checks
* feat: add conditional compilation for Unix-specific IPC path handling
* chore: update Cargo.lock
* feat: add external controller configuration and UI support
* Refactor proxy and connection management to use IPC-based commands
- Updated `get_proxies` function in `proxy.rs` to call the new IPC command.
- Renamed `get_refresh_proxies` to `get_proxies` in `ipc/general.rs` for consistency.
- Added new IPC commands for managing proxies, connections, and configurations in `cmds.ts`.
- Refactored API calls in various components to use the new IPC commands instead of HTTP requests.
- Improved error handling and response management in the new IPC functions.
- Cleaned up unused API functions in `api.ts` and redirected relevant calls to `cmds.ts`.
- Enhanced connection management features including health checks and updates for proxy providers.
* chore: update dependencies and improve error handling in IPC manager
* fix: downgrade zip dependency from 4.3.0 to 4.2.0
* feat: Implement traffic and memory data monitoring service
- Added `TrafficService` and `TrafficManager` to manage traffic and memory data collection.
- Introduced commands to get traffic and memory data, start and stop the traffic service.
- Integrated IPC calls for traffic and memory data retrieval in the frontend.
- Updated `AppDataProvider` and `EnhancedTrafficStats` components to utilize new data fetching methods.
- Removed WebSocket connections for traffic and memory data, replaced with IPC polling.
- Added logging for better traceability of data fetching and service status.
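The polling model that replaced the WebSocket connections can be sketched as a background sampler that keeps the latest snapshot in shared atomics, which a command handler then reads on demand. This is a simplified illustration; `fetch_traffic` is a hypothetical stand-in for the real IPC call, and the real service lives behind `TrafficService`/`TrafficManager`.

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

// Hypothetical stand-in for the IPC call that samples mihomo's
// traffic counters; returns (up_bytes, down_bytes).
fn fetch_traffic(tick: u64) -> (u64, u64) {
    (tick * 100, tick * 300)
}

fn main() {
    let up = Arc::new(AtomicU64::new(0));
    let down = Arc::new(AtomicU64::new(0));

    // Background poller: overwrite the shared snapshot each interval.
    let (u, d) = (Arc::clone(&up), Arc::clone(&down));
    let poller = thread::spawn(move || {
        for tick in 1..=3 {
            let (cur_up, cur_down) = fetch_traffic(tick);
            u.store(cur_up, Ordering::Relaxed);
            d.store(cur_down, Ordering::Relaxed);
            thread::sleep(Duration::from_millis(10));
        }
    });
    poller.join().unwrap();

    // A get_traffic_data-style command just reads the latest snapshot.
    println!("up={} down={}", up.load(Ordering::Relaxed), down.load(Ordering::Relaxed));
}
```

The frontend then invokes the command on its own schedule instead of holding a WebSocket open.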
* refactor: unify external controller handling and improve IPC path resolution
* fix: replace direct IPC path retrieval with guard function for external controller
* fix: convert external controller IPC path to string for proper insertion in config map
* fix: update dependencies and improve IPC response handling
* fix: remove unnecessary unix conditional for ipc path import
* Refactor traffic and memory monitoring to use IPC stream; remove TrafficService and TrafficManager. Introduce new IPC-based data retrieval methods for traffic and memory, including formatted data and system overview. Update frontend components to utilize new APIs for enhanced data display and management.
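If the IPC stream yields cumulative byte counters, per-second rates have to be derived from successive samples. The sketch below is an assumption about that calculation, not the project's actual implementation; the field names only loosely mirror the ones used in this commit.

```rust
use std::time::{Duration, Instant};

// Assumed snapshot shape: cumulative counters plus a timestamp.
struct TrafficSnapshot {
    total_up: u64,
    total_down: u64,
    at: Instant,
}

// Derive (up_rate, down_rate) in bytes/sec from two snapshots.
fn rates(prev: &TrafficSnapshot, cur: &TrafficSnapshot) -> (u64, u64) {
    let secs = cur.at.duration_since(prev.at).as_secs_f64().max(1e-9);
    (
        (cur.total_up.saturating_sub(prev.total_up) as f64 / secs) as u64,
        (cur.total_down.saturating_sub(prev.total_down) as f64 / secs) as u64,
    )
}

fn main() {
    let t0 = Instant::now();
    let prev = TrafficSnapshot { total_up: 1000, total_down: 5000, at: t0 };
    let cur = TrafficSnapshot {
        total_up: 3000,
        total_down: 9000,
        at: t0 + Duration::from_secs(2),
    };
    let (up_rate, down_rate) = rates(&prev, &cur);
    println!("up {up_rate} B/s, down {down_rate} B/s"); // 1000 and 2000
}
```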
* chore: bump crate rand version to 0.9.2
* feat: Implement enhanced traffic monitoring system with data compression and sampling
- Introduced `useTrafficMonitorEnhanced` hook for advanced traffic data management.
- Added `TrafficDataSampler` class for handling raw and compressed traffic data.
- Implemented reference counting to manage data collection based on component usage.
- Enhanced data validation with `SystemMonitorValidator` for API responses.
- Created diagnostic tools for monitoring performance and error tracking.
- Updated existing hooks to utilize the new enhanced monitoring features.
- Added utility functions for generating and formatting diagnostic reports.
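The real `TrafficDataSampler` is a TypeScript class on the frontend; the Rust sketch below only illustrates the underlying idea under stated assumptions: a bounded window of raw points, plus a long-term "compressed" series that retains every Nth point.

```rust
use std::collections::VecDeque;

// Illustrative sampler: bounded raw window + every-Nth compression.
struct Sampler {
    raw: VecDeque<u64>,  // most recent points, capped at raw_cap
    compressed: Vec<u64>, // long-term downsampled series
    raw_cap: usize,
    every: usize,
    seen: usize,
}

impl Sampler {
    fn new(raw_cap: usize, every: usize) -> Self {
        Self { raw: VecDeque::new(), compressed: Vec::new(), raw_cap, every, seen: 0 }
    }

    fn push(&mut self, point: u64) {
        if self.raw.len() == self.raw_cap {
            self.raw.pop_front(); // evict the oldest raw point
        }
        self.raw.push_back(point);
        if self.seen % self.every == 0 {
            self.compressed.push(point); // keep every Nth point long-term
        }
        self.seen += 1;
    }
}

fn main() {
    let mut s = Sampler::new(4, 3);
    for p in 0..10 {
        s.push(p);
    }
    assert_eq!(s.raw, [6, 7, 8, 9]);        // last 4 raw points
    assert_eq!(s.compressed, [0, 3, 6, 9]); // every 3rd point
}
```

Reference counting (as the commit describes) would wrap such a sampler so collection stops when no component is subscribed.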
* feat(ipc): improve URL encoding and error handling for IPC requests
- Add percent-encoding for URL paths to handle special characters properly
- Enhance error handling in update_proxy with proper logging
- Remove excessive debug logging to reduce noise
- Update kode-bridge dependency to v0.1.5
- Fix JSON parsing error handling in PUT requests
Changes include:
- Proper URL encoding for connection IDs, proxy names, and test URLs
- Enhanced error handling with fallback responses in updateProxy
- Comment out verbose debug logs in traffic monitoring and data validation
- Update dependency version for improved IPC functionality
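The path-segment encoding described above can be shown with a minimal hand-rolled encoder; the project itself would typically use a crate for this, so treat the function below as an illustration of the rule (unreserved characters pass through, everything else becomes `%XX`), not the actual implementation.

```rust
// Percent-encode a single URL path segment: keep RFC 3986 unreserved
// characters, escape everything else byte-by-byte as %XX.
fn encode_segment(s: &str) -> String {
    let mut out = String::new();
    for b in s.bytes() {
        match b {
            b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9' | b'-' | b'_' | b'.' | b'~' => {
                out.push(b as char)
            }
            _ => out.push_str(&format!("%{b:02X}")),
        }
    }
    out
}

fn main() {
    // A proxy group name with a space stays safe inside a URL path.
    let url = format!("/proxies/{}", encode_segment("Auto Select"));
    assert_eq!(url, "/proxies/Auto%20Select");
}
```

Without this, connection IDs or proxy names containing spaces or non-ASCII characters would produce malformed IPC request paths.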
* feat: major improvements in architecture, traffic monitoring, and data validation
* Refactor traffic graph components: Replace EnhancedTrafficGraph with EnhancedCanvasTrafficGraph, improve rendering performance, and enhance visual elements. Remove deprecated code and ensure compatibility with global data management.
* chore: update UPDATELOG.md for v2.4.0 release, refine traffic monitoring system details, and enhance IPC functionality
* chore: update UPDATELOG.md to reflect removal of deprecated MihomoManager and unify IPC control
* refactor: remove global traffic service testing method from cmds.ts
* Update src/components/home/enhanced-canvas-traffic-graph.tsx
* Update src/hooks/use-traffic-monitor-enhanced.ts
* Update src/components/layout/layout-traffic.tsx
* refactor: remove debug state management from LayoutTraffic component
---------
@@ -1,7 +1,5 @@
 use super::CmdResult;
-use crate::{
-    config::*, core::*, feat, module::mihomo::MihomoManager, process::AsyncHandler, wrap_err,
-};
+use crate::{config::*, core::*, feat, ipc::IpcManager, process::AsyncHandler, wrap_err};
 use serde_yaml::Mapping;
 
 /// Copy Clash environment variables
@@ -90,9 +88,11 @@ pub async fn clash_api_get_proxy_delay(
     url: Option<String>,
     timeout: i32,
 ) -> CmdResult<serde_json::Value> {
-    MihomoManager::global()
-        .test_proxy_delay(&name, url, timeout)
-        .await
+    wrap_err!(
+        IpcManager::global()
+            .test_proxy_delay(&name, url, timeout)
+            .await
+    )
 }
 
 /// Test URL delay
@@ -267,3 +267,273 @@ pub async fn validate_dns_config() -> CmdResult<(bool, String)> {
         Err(e) => Err(e.to_string()),
     }
 }
+
+/// Get Clash version info
+#[tauri::command]
+pub async fn get_clash_version() -> CmdResult<serde_json::Value> {
+    wrap_err!(IpcManager::global().get_version().await)
+}
+
+/// Get Clash config
+#[tauri::command]
+pub async fn get_clash_config() -> CmdResult<serde_json::Value> {
+    wrap_err!(IpcManager::global().get_config().await)
+}
+
+/// Update geo data
+#[tauri::command]
+pub async fn update_geo_data() -> CmdResult {
+    wrap_err!(IpcManager::global().update_geo_data().await)
+}
+
+/// Upgrade the Clash core
+#[tauri::command]
+pub async fn upgrade_clash_core() -> CmdResult {
+    wrap_err!(IpcManager::global().upgrade_core().await)
+}
+
+/// Get rules
+#[tauri::command]
+pub async fn get_clash_rules() -> CmdResult<serde_json::Value> {
+    wrap_err!(IpcManager::global().get_rules().await)
+}
+
+/// Update proxy selection
+#[tauri::command]
+pub async fn update_proxy_choice(group: String, proxy: String) -> CmdResult {
+    wrap_err!(IpcManager::global().update_proxy(&group, &proxy).await)
+}
+
+/// Get proxy providers
+#[tauri::command]
+pub async fn get_proxy_providers() -> CmdResult<serde_json::Value> {
+    wrap_err!(IpcManager::global().get_providers_proxies().await)
+}
+
+/// Get rule providers
+#[tauri::command]
+pub async fn get_rule_providers() -> CmdResult<serde_json::Value> {
+    wrap_err!(IpcManager::global().get_rule_providers().await)
+}
+
+/// Proxy provider health check
+#[tauri::command]
+pub async fn proxy_provider_health_check(name: String) -> CmdResult {
+    wrap_err!(
+        IpcManager::global()
+            .proxy_provider_health_check(&name)
+            .await
+    )
+}
+
+/// Update a proxy provider
+#[tauri::command]
+pub async fn update_proxy_provider(name: String) -> CmdResult {
+    wrap_err!(IpcManager::global().update_proxy_provider(&name).await)
+}
+
+/// Update a rule provider
+#[tauri::command]
+pub async fn update_rule_provider(name: String) -> CmdResult {
+    wrap_err!(IpcManager::global().update_rule_provider(&name).await)
+}
+
+/// Get connections
+#[tauri::command]
+pub async fn get_clash_connections() -> CmdResult<serde_json::Value> {
+    wrap_err!(IpcManager::global().get_connections().await)
+}
+
+/// Delete a connection
+#[tauri::command]
+pub async fn delete_clash_connection(id: String) -> CmdResult {
+    wrap_err!(IpcManager::global().delete_connection(&id).await)
+}
+
+/// Close all connections
+#[tauri::command]
+pub async fn close_all_clash_connections() -> CmdResult {
+    wrap_err!(IpcManager::global().close_all_connections().await)
+}
+
+/// Get traffic data (via the new streaming IPC monitor)
+#[tauri::command]
+pub async fn get_traffic_data() -> CmdResult<serde_json::Value> {
+    log::info!(target: "app", "开始获取流量数据 (IPC流式)");
+    let traffic = crate::ipc::get_current_traffic().await;
+    let result = serde_json::json!({
+        "up": traffic.total_up,
+        "down": traffic.total_down,
+        "up_rate": traffic.up_rate,
+        "down_rate": traffic.down_rate,
+        "last_updated": traffic.last_updated.elapsed().as_secs()
+    });
+    log::info!(target: "app", "获取流量数据结果: up={}, down={}, up_rate={}, down_rate={}",
+        traffic.total_up, traffic.total_down, traffic.up_rate, traffic.down_rate);
+    Ok(result)
+}
+
+/// Get memory data (via the new streaming IPC monitor)
+#[tauri::command]
+pub async fn get_memory_data() -> CmdResult<serde_json::Value> {
+    log::info!(target: "app", "开始获取内存数据 (IPC流式)");
+    let memory = crate::ipc::get_current_memory().await;
+    let usage_percent = if memory.oslimit > 0 {
+        (memory.inuse as f64 / memory.oslimit as f64) * 100.0
+    } else {
+        0.0
+    };
+    let result = serde_json::json!({
+        "inuse": memory.inuse,
+        "oslimit": memory.oslimit,
+        "usage_percent": usage_percent,
+        "last_updated": memory.last_updated.elapsed().as_secs()
+    });
+    log::info!(target: "app", "获取内存数据结果: inuse={}, oslimit={}, usage={}%",
+        memory.inuse, memory.oslimit, usage_percent);
+    Ok(result)
+}
+
+/// Start the traffic monitoring service (the streaming IPC monitor starts automatically; kept for compatibility)
+#[tauri::command]
+pub async fn start_traffic_service() -> CmdResult {
+    log::info!(target: "app", "启动流量监控服务 (IPC流式监控)");
+    // The new IPC monitor starts automatically on first access;
+    // trigger one access to make sure it is initialized.
+    let _ = crate::ipc::get_current_traffic().await;
+    let _ = crate::ipc::get_current_memory().await;
+    log::info!(target: "app", "IPC流式监控已激活");
+    Ok(())
+}
+
+/// Stop the traffic monitoring service (the streaming IPC monitor needs no explicit stop; kept for compatibility)
+#[tauri::command]
+pub async fn stop_traffic_service() -> CmdResult {
+    log::info!(target: "app", "停止流量监控服务请求 (IPC流式监控)");
+    // The new IPC monitor is persistent and needs no explicit stop.
+    log::info!(target: "app", "IPC流式监控继续运行");
+    Ok(())
+}
+
+/// Get formatted traffic data (with units, ready for frontend display)
+#[tauri::command]
+pub async fn get_formatted_traffic_data() -> CmdResult<serde_json::Value> {
+    log::info!(target: "app", "获取格式化流量数据");
+    let (up_rate, down_rate, total_up, total_down, is_fresh) =
+        crate::ipc::get_formatted_traffic().await;
+    let result = serde_json::json!({
+        "up_rate_formatted": up_rate,
+        "down_rate_formatted": down_rate,
+        "total_up_formatted": total_up,
+        "total_down_formatted": total_down,
+        "is_fresh": is_fresh
+    });
+    // Clippy: variables can be used directly in the format string
+    log::debug!(target: "app", "格式化流量数据: ↑{up_rate}/s ↓{down_rate}/s (总计: ↑{total_up} ↓{total_down})");
+    Ok(result)
+}
+
+/// Get formatted memory data (with units, ready for frontend display)
+#[tauri::command]
+pub async fn get_formatted_memory_data() -> CmdResult<serde_json::Value> {
+    log::info!(target: "app", "获取格式化内存数据");
+    let (inuse, oslimit, usage_percent, is_fresh) = crate::ipc::get_formatted_memory().await;
+    let result = serde_json::json!({
+        "inuse_formatted": inuse,
+        "oslimit_formatted": oslimit,
+        "usage_percent": usage_percent,
+        "is_fresh": is_fresh
+    });
+    // Clippy: variables can be used directly in the format string
+    log::debug!(target: "app", "格式化内存数据: {inuse} / {oslimit} ({usage_percent:.1}%)");
+    Ok(result)
+}
+
+/// Get a system monitor overview (traffic + memory, so the frontend can fetch all state in one call)
+#[tauri::command]
+pub async fn get_system_monitor_overview() -> CmdResult<serde_json::Value> {
+    log::debug!(target: "app", "获取系统监控概览");
+
+    // Fetch traffic and memory data concurrently
+    let (traffic, memory) = tokio::join!(
+        crate::ipc::get_current_traffic(),
+        crate::ipc::get_current_memory()
+    );
+
+    let (traffic_formatted, memory_formatted) = tokio::join!(
+        crate::ipc::get_formatted_traffic(),
+        crate::ipc::get_formatted_memory()
+    );
+
+    let traffic_is_fresh = traffic.last_updated.elapsed().as_secs() < 5;
+    let memory_is_fresh = memory.last_updated.elapsed().as_secs() < 10;
+
+    let result = serde_json::json!({
+        "traffic": {
+            "raw": {
+                "up": traffic.total_up,
+                "down": traffic.total_down,
+                "up_rate": traffic.up_rate,
+                "down_rate": traffic.down_rate
+            },
+            "formatted": {
+                "up_rate": traffic_formatted.0,
+                "down_rate": traffic_formatted.1,
+                "total_up": traffic_formatted.2,
+                "total_down": traffic_formatted.3
+            },
+            "is_fresh": traffic_is_fresh
+        },
+        "memory": {
+            "raw": {
+                "inuse": memory.inuse,
+                "oslimit": memory.oslimit,
+                "usage_percent": if memory.oslimit > 0 {
+                    (memory.inuse as f64 / memory.oslimit as f64) * 100.0
+                } else {
+                    0.0
+                }
+            },
+            "formatted": {
+                "inuse": memory_formatted.0,
+                "oslimit": memory_formatted.1,
+                "usage_percent": memory_formatted.2
+            },
+            "is_fresh": memory_is_fresh
+        },
+        "overall_status": if traffic_is_fresh && memory_is_fresh { "healthy" } else { "stale" }
+    });
+
+    Ok(result)
+}
+
+/// Get proxy group delays
+#[tauri::command]
+pub async fn get_group_proxy_delays(
+    group_name: String,
+    url: Option<String>,
+    timeout: Option<i32>,
+) -> CmdResult<serde_json::Value> {
+    wrap_err!(
+        IpcManager::global()
+            .get_group_proxy_delays(&group_name, url, timeout.unwrap_or(10000))
+            .await
+    )
+}
+
+/// Check whether debugging is enabled
+#[tauri::command]
+pub async fn is_clash_debug_enabled() -> CmdResult<bool> {
+    match IpcManager::global().is_debug_enabled().await {
+        Ok(enabled) => Ok(enabled),
+        Err(_) => Ok(false),
+    }
+}
+
+/// Garbage collection
+#[tauri::command]
+pub async fn clash_gc() -> CmdResult {
+    wrap_err!(IpcManager::global().gc().await)
+}