
Add topology node data and group configuration query capabilities

tanglin hai, 3 weeks ago
parent
commit
5e075d9cc2

+ 165 - 26
README.md

@@ -23,7 +23,9 @@
 - An accessible `sys_config` table
 - `DATABASE_URL` pointing to a database that contains `sys_config`
 
-On Windows, pinning to the `uv` Python 3.13 environment is recommended to avoid `pywin32` compatibility issues under Python 3.14.
+Python `3.11` through `3.13` are supported.
+
+On Windows, prefer the `uv` Python `3.13` environment to avoid `pywin32` compatibility issues under Python `3.14`.
 
 Default database URL:
 
@@ -61,9 +63,11 @@ UPSTREAM_REQUEST_TIMEOUT=60
 Run in the project root:
 
 ```bash
-uv sync --python 3.13
+uv sync --python 3.11
 ```
 
+You can also use `3.12` or `3.13`, e.g. `uv sync --python 3.12`.
+
 ## Startup
 
 By default it starts in HTTP mode; default address: `http://127.0.0.1:8500/mcp`
@@ -71,13 +75,13 @@ uv sync --python 3.13
 Option 1: start as a module
 
 ```bash
-uv run --python 3.13 python -m instrument_config_mcp
+uv run --python 3.11 python -m instrument_config_mcp
 ```
 
 Option 2: start via the console script
 
 ```bash
-uv run --python 3.13 instrument-config-mcp
+uv run --python 3.11 instrument-config-mcp
 ```
 
 Optional environment variables:
@@ -90,7 +94,7 @@ uv run --python 3.13 instrument-config-mcp
 
 ```bash
 set DATABASE_URL=sqlite:///llm_proxy.db
-uv run --python 3.13 python -m instrument_config_mcp
+uv run --python 3.11 python -m instrument_config_mcp
 ```
 
 To specify the HTTP address:
@@ -99,16 +103,28 @@ uv run --python 3.13 python -m instrument_config_mcp
 set MCP_HOST=0.0.0.0
 set MCP_PORT=8500
 set MCP_PATH=/mcp
-uv run --python 3.13 python -m instrument_config_mcp
+uv run --python 3.11 python -m instrument_config_mcp
 ```
 
 To specify the upstream timeout:
 
 ```bash
 set UPSTREAM_REQUEST_TIMEOUT=60
-uv run --python 3.13 python -m instrument_config_mcp
+uv run --python 3.11 python -m instrument_config_mcp
 ```
 
+## OpenCode Configuration
+
+`opencode.jsonc` in the repository root is the project's default OpenCode MCP connection config and can be committed to version control.
+
+Default conventions:
+
+- The local MCP service listens on `http://127.0.0.1:8500/mcp`
+- The MCP name is `instrument_config`
+- The current config contains no sensitive information such as accounts, passwords, or tokens
+
+If your local port or connection method differs, adjust this config locally.
+
 ## MCP Tools
 
 Currently available tools:
@@ -122,12 +138,13 @@ uv run --python 3.13 python -m instrument_config_mcp
 - `search_devices`
 - `search_meters`
 - `search_points`
-- `topology_group_list`
-- `topology_list`
-- `topology_get_node`
-- `topology_find_context`
+- `topology.group_list`
+- `topology.list`
+- `topology.get_node`
+- `topology.get_group_config`
+- `topology.find_context`
 
-All tools except `project.list` require `project_key`.
+All tools except `project.list` and `version.get` require `project_key`.
 
 ## Topology Capabilities Overview
 
@@ -157,17 +174,18 @@ uv run --python 3.13 python -m instrument_config_mcp
 
 ## Topology Upstream APIs
 
-Only two upstream APIs are actually used by the current code, both defined in `instrument_config_mcp/config_api.py`:
+Three upstream APIs are actually used by the current code, all defined in `instrument_config_mcp/config_api.py`:
 
 | Python function | Upstream API | Purpose |
 | --- | --- | --- |
 | `list_topologies_with_group(project_key, group_ids=None)` | `POST /api/configapi/topo/list_with_group` | Fetch topology groups and the topology list |
 | `get_topology(project_key, id)` | `POST /api/configapi/topo/get` | Fetch details of a single topology |
+| `get_topology_data(project_key, id, display, accu_step=None, ts=None)` | `POST /api/configapi/topo/get_data` | Fetch real-time or accumulated node values for a single topology |
 
 Notes:
 
-- There is no wrapper for or use of `topo/get_data` yet
 - The cache design targets "structure query" and "anomaly localization" scenarios; no display-layer time-series cache is introduced
+- `topo/get_data` is not written to the local cache; values are fetched live during `topology.get_node` queries
 
 ## Topology Cache Refresh
 
@@ -287,6 +305,8 @@ uv run --python 3.13 python -m instrument_config_mcp
 - `object_type_code`
 - `group_id`
 - `root_shape`
+- `data_options_json`
+- `dimension_config_json`
 - `source_updated_time`
 - `refreshed_at`
 - `is_active`
@@ -367,7 +387,7 @@ Schema 说明:
 
 ## Topology Tool Reference
 
-### `topology_group_list(project_key)`
+### `topology.group_list(project_key)`
 
 Purpose:
 
@@ -379,7 +399,7 @@ Schema 说明:
 - `groups`
 - `total`
 
-### `topology_list(project_key, group_id=None, object_type_code=None)`
+### `topology.list(project_key, group_id=None, object_type_code=None)`
 
 Purpose:
 
@@ -403,7 +423,7 @@ Schema 说明:
 - `root_shape`
 - `refreshed_at`
 
-### `topology_get_node(project_key, topology_id, node_id='root', include_siblings=True, include_children=True)`
+### `topology.get_node(project_key, topology_id, node_id='root', include_siblings=True, include_children=True)`
 
 Purpose:
 
@@ -419,13 +439,105 @@ Schema 说明:
 
 Output structure:
 
+- `data_window`
 - `topology`
 - `node`
 - `parents`
 - `children`
 - `siblings`
 
-### `topology_find_context(project_key, entity_type, entity_id, topology_id=None, include_siblings=True, ancestor_depth=5, descendant_depth=2)`
+Where:
+
+- `data_window.hourly_ts` holds the 12 hour-aligned timestamps before the current hour
+- `data_window.daily_ts` holds the 7 day-start (midnight) timestamps, including today
+- `topology` now contains `data_options`, `metric_definitions`, and `dimension_config` in addition to the basic metadata
+- `metric_definitions.instant` and `metric_definitions.accu` are lookup dictionaries keyed by metric `code`, so an AI or other caller can map the keys in node values directly to their (Chinese) display names, units, and other definitions
+
+Each node object currently contains:
+
+- `node_id`
+- `node_name`
+- `level`
+- `parent_node_id`
+- `refer_id`
+- `refer_level`
+- `is_virtual`
+- `path_text`
+- `child_count`
+- `data`
+
+The structure of `data` is:
+
+- `data.instant`
+- `data.accu.hourly`
+- `data.accu.daily`
+
+Notes:
+
+- `data.instant` is the node's real-time value object, e.g. `{ "E": 0 }`
+- `data.accu.hourly` is an array of fixed length 12; each item contains `ts` and `values`
+- `data.accu.daily` is an array of fixed length 7; each item contains `ts` and `values`
+- Keys in `values` are interpreted via `topology.metric_definitions.accu`
+- If a node has no value at a given timestamp, that entry returns `"values": null`
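The hourly and daily windows can be reproduced with the same arithmetic the cache module uses (`_floor_hour_ts`, `_hourly_window_timestamps`, and `_daily_window_timestamps` in the `topology_cache.py` diff below); a self-contained sketch:

```python
from datetime import datetime


def floor_hour_ts(ts: int) -> int:
    # Round a unix timestamp down to the start of its hour.
    return max(0, int(ts) - (int(ts) % 3600))


def hourly_window(base_ts: int) -> list[int]:
    # The 12 hour-aligned timestamps strictly before the current hour.
    current_hour = floor_hour_ts(base_ts)
    return [current_hour - 3600 * offset for offset in range(1, 13)]


def floor_day_ts(ts: int) -> int:
    # Round down to local midnight, as _floor_day_ts does.
    current = datetime.fromtimestamp(int(ts)).astimezone()
    return int(current.replace(hour=0, minute=0, second=0, microsecond=0).timestamp())


def daily_window(base_ts: int) -> list[int]:
    # The 7 day-start timestamps, including today's midnight (offset 0).
    current_day = floor_day_ts(base_ts)
    return [current_day - 86400 * offset for offset in range(0, 7)]


print(hourly_window(9000)[0], len(hourly_window(9000)))  # 3600 12
```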
+
+### `topology.get_group_config(project_key, topology_id)`
+
+用途:
+
+- 返回 type=2 拓扑图的分组配置和筛选配置
+- 配置来源是该拓扑详情中的 `dimension_config`
+
+输入约束:
+
+- 仅适用于 `topology_type=2`
+- 如果目标拓扑不是 type=2,会返回 `supported=false`
+- `type=17`(仪表型号)的筛选项当前没有名称解析规则,遇到该类型会显式报错等待补充规则
+
+输出结构:
+
+- `supported`
+- `raw_dimension_config`
+- `groupings`
+- `filter`
+
+其中:
+
+- `raw_dimension_config` 保留上游原始 `dimension_config`
+- `groupings` 是对 `dimensions[]` 的结构化解释
+- `filter.conditions[].field_items` 是把 `fields` 里的业务 ID 解析成轻量的 `id + name`
+
+当前枚举解释:
+
+- `order`: `1=顺序`,`2=逆序`
+- `filter_type`: `1=所有`,`2=任一`
+- `match_type`: `1=等于`,`2=不等于`,`3=包含`,`4=不包含`
+
+当前对象类型解释:
+
+- `11=位置`
+- `12=系统类型`
+- `13=系统`
+- `14=设备类型`
+- `15=设备`
+- `16=仪表类型`
+- `17=仪表型号`
+- `18=仪表`
+- `19=拓扑图分组`
+- `20=拓扑图`
+
+`field_items` 名称解析来源:
+
+- `11` 位置:`list_locations`
+- `12` 系统类型:`list_system_tree`
+- `13` 系统:`list_systems`
+- `14` 设备类型:`list_device_types`
+- `15` 设备:`search_devices`
+- `16` 仪表类型:`list_meter_types`
+- `18` 仪表:`search_meters`
+- `19` 拓扑图分组:本地拓扑缓存
+- `20` 拓扑图:本地拓扑缓存
+
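These label tables mirror the constant dictionaries added in `topology_cache.py` (`ORDER_LABELS`, `FILTER_TYPE_LABELS`, `MATCH_TYPE_LABELS`, `OBJECT_TYPE_LABELS`). A minimal sketch of how a raw filter condition gets annotated, with unknown codes falling back to an empty label just as the server does:

```python
# Label tables copied from instrument_config_mcp/topology_cache.py.
ORDER_LABELS = {1: "顺序", 2: "逆序"}
FILTER_TYPE_LABELS = {1: "所有", 2: "任一"}
MATCH_TYPE_LABELS = {1: "等于", 2: "不等于", 3: "包含", 4: "不包含"}
OBJECT_TYPE_LABELS = {
    11: "位置", 12: "系统类型", 13: "系统", 14: "设备类型", 15: "设备",
    16: "仪表类型", 17: "仪表型号", 18: "仪表", 19: "拓扑图分组", 20: "拓扑图",
}


def describe_condition(condition: dict) -> dict:
    # Annotate a raw filter condition with human-readable labels;
    # unknown codes resolve to "".
    type_code = condition.get("type") or 0
    match_type = condition.get("match_type") or 0
    return {
        **condition,
        "type_label": OBJECT_TYPE_LABELS.get(type_code, ""),
        "match_type_label": MATCH_TYPE_LABELS.get(match_type, ""),
    }


print(describe_condition({"type": 14, "match_type": 1, "fields": [5]}))
```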
+### `topology.find_context(project_key, entity_type, entity_id, topology_id=None, include_siblings=True, ancestor_depth=5, descendant_depth=2)`
 
 Purpose:
 
@@ -478,8 +590,35 @@ Schema 说明:
 
 - When `node_id='root'`, the topology's root node is resolved automatically first, then that root's direct neighborhood is returned
 
+In addition, it fetches node data for the current topology live:
+
+- 1 call with `display=instant`
+- 12 calls with `display=accu, accu_step=2`
+- 7 calls with `display=accu, accu_step=3`
+
+In other words, a single `topology.get_node` call also returns:
+
+- real-time values
+- accumulated values for the last 12 hour-aligned hours
+- accumulated values for the last 7 day-start midnights
+
 It does not perform deep traversal.
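Putting the fetch passes together, the per-node `data` payload is assembled roughly as in `_attach_runtime_data_to_node` (shown in the `topology_cache.py` diff below); a self-contained sketch where a missing value surfaces as `None` (serialized as `"values": null`):

```python
from typing import Any


def attach_runtime_data(node: dict[str, Any], bundle: dict[str, Any]) -> dict[str, Any]:
    # Pick this node's values out of the topology-wide data maps
    # produced by the 1 + 12 + 7 upstream calls.
    node_id = str(node.get("node_id") or "").strip()
    node["data"] = {
        "instant": (bundle.get("instant") or {}).get(node_id),
        "accu": {
            "hourly": [
                {"ts": item["ts"], "values": item["data_map"].get(node_id)}
                for item in bundle.get("hourly") or []
            ],
            "daily": [
                {"ts": item["ts"], "values": item["data_map"].get(node_id)}
                for item in bundle.get("daily") or []
            ],
        },
    }
    return node


bundle = {
    "instant": {"n1": {"E": 0}},
    "hourly": [{"ts": 3600, "data_map": {"n1": {"E": 1.5}}}],
    "daily": [{"ts": 0, "data_map": {}}],  # no value for n1 at this ts
}
node = attach_runtime_data({"node_id": "n1"}, bundle)
print(node["data"]["accu"]["daily"][0]["values"])  # None
```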
 
+### Group Configuration Query
+
+`topology.get_group_config` will:
+
+1. Read the target topology's `dimension_config` from the topology cache
+2. Interpret `dimensions` into the grouping configuration `groupings`
+3. Interpret `filter` into the filter configuration
+4. Resolve the name of each business ID in `fields` according to `condition.type`
+
+It will not:
+
+- Modify the cache
+- Fetch node time-series data
+- Guess name-resolution rules for unknown types
+
 ### Entity Context Query
 
 `topology.find_context` will:
@@ -574,37 +713,37 @@ Schema 说明:
 Example 1: query the first page of locations
 
 ```bash
-uv run --python 3.13 python scripts/smoke_test.py list-locations --project-key dev-01 --keyword F1
+uv run --python 3.11 python scripts/smoke_test.py list-locations --project-key dev-01 --keyword F1
 ```
 
 Example 2: query the system tree
 
 ```bash
-uv run --python 3.13 python scripts/smoke_test.py list-system-tree --project-key dev-01
+uv run --python 3.11 python scripts/smoke_test.py list-system-tree --project-key dev-01
 ```
 
 Example 3: query device types
 
 ```bash
-uv run --python 3.13 python scripts/smoke_test.py list-device-types --project-key dev-01
+uv run --python 3.11 python scripts/smoke_test.py list-device-types --project-key dev-01
 ```
 
 Example 4: search meters by location
 
 ```bash
-uv run --python 3.13 python scripts/smoke_test.py search-meters --project-key dev-01 --location-id 162 --show-below true --page-num 1
+uv run --python 3.11 python scripts/smoke_test.py search-meters --project-key dev-01 --location-id 162 --show-below true --page-num 1
 ```
 
 Example 5: search devices by system and device type
 
 ```bash
-uv run --python 3.13 python scripts/smoke_test.py search-devices --project-key dev-01 --system-ids 21 --device-type-ids 5
+uv run --python 3.11 python scripts/smoke_test.py search-devices --project-key dev-01 --system-ids 21 --device-type-ids 5
 ```
 
 Example 6: query points under a given meter
 
 ```bash
-uv run --python 3.13 python scripts/smoke_test.py search-points --project-key dev-01 --id 1785 --page-num 1
+uv run --python 3.11 python scripts/smoke_test.py search-points --project-key dev-01 --id 1785 --page-num 1
 ```
 
 ## Suggested Integration Order
@@ -633,7 +772,7 @@ uv run --python 3.13 python scripts/smoke_test.py search-points --project-key de
 1. The local MCP service is already running, for example:
 
 ```powershell
-uv run --python 3.13 python -m instrument_config_mcp
+uv run --python 3.11 python -m instrument_config_mcp
 ```
 
 Default address: `http://127.0.0.1:8500/mcp`
@@ -806,5 +945,5 @@ opencode run ...
 Syntax check:
 
 ```bash
-uv run --python 3.13 python -m compileall .
+uv run --python 3.11 python -m compileall .
 ```

+ 3 - 1
instrument_config_mcp/__init__.py

@@ -1 +1,3 @@
-__all__ = []
+__version__ = "1.0.0"
+
+__all__ = ["__version__"]

+ 19 - 0
instrument_config_mcp/config_api.py

@@ -193,3 +193,22 @@ def get_topology(project_key: str, id: int) -> Any:
             "id": id,
         },
     )
+
+
+def get_topology_data(
+    project_key: str,
+    id: int,
+    display: str,
+    accu_step: int | None = None,
+    ts: int | None = None,
+) -> Any:
+    payload: dict[str, Any] = {
+        "operator": CONFIG_OPERATOR,
+        "id": id,
+        "display": str(display or "").strip(),
+    }
+    if accu_step is not None:
+        payload["accu_step"] = accu_step
+    if ts is not None:
+        payload["ts"] = ts
+    return _post_config(project_key, "/api/configapi/topo/get_data", payload)

+ 17 - 0
instrument_config_mcp/server.py

@@ -9,6 +9,7 @@ from pydantic import Field
 from starlette.requests import Request
 from starlette.responses import JSONResponse, Response
 
+from . import __version__
 from .auth import load_projects_config
 from .config_api import (
     list_device_types as api_list_device_types,
@@ -22,6 +23,7 @@ from .config_api import (
 )
 from .topology_cache import (
     find_topology_context,
+    get_topology_group_config,
     get_topology_node,
     list_topologies,
     list_topology_groups,
@@ -32,6 +34,15 @@ from .topology_cache import (
 mcp = FastMCP("instrument-config")
 
 
+@mcp.tool(name="version.get")
+def get_version() -> dict[str, str]:
+    """Get the MCP project version."""
+    return {
+        "name": "instrument-config-mcp",
+        "version": __version__,
+    }
+
+
 @mcp.tool(
     name="project.list",
     title="Project List",
@@ -269,6 +280,12 @@ def topology_get_node(
     )
 
 
+@mcp.tool(name="topology.get_group_config")
+def topology_get_group_config(project_key: str, topology_id: int) -> dict[str, Any]:
+    """Get grouping and filter configuration derived from topology dimension_config."""
+    return get_topology_group_config(project_key, topology_id)
+
+
 @mcp.tool(name="topology.find_context")
 def topology_find_context(
     project_key: str,

+ 530 - 5
instrument_config_mcp/topology_cache.py

@@ -1,21 +1,74 @@
 from __future__ import annotations
 
 from collections import defaultdict, deque
+from concurrent.futures import ThreadPoolExecutor
 from datetime import datetime, timezone
+import json
 from typing import Any
 
-from sqlalchemy import Index, Integer, String, Text, UniqueConstraint, delete, select
+from sqlalchemy import (
+    Index,
+    Integer,
+    String,
+    Text,
+    UniqueConstraint,
+    delete,
+    inspect,
+    select,
+    text,
+)
 from sqlalchemy.orm import Mapped, Session, mapped_column
 
 from .config_api import list_topologies_with_group as api_list_topologies_with_group
+from .config_api import list_locations as api_list_locations
+from .config_api import list_system_tree as api_list_system_tree
+from .config_api import list_systems as api_list_systems
+from .config_api import list_device_types as api_list_device_types
+from .config_api import list_meter_types as api_list_meter_types
+from .config_api import search_devices as api_search_devices
+from .config_api import search_meters as api_search_meters
+from .config_api import get_topology_data as api_get_topology_data
 from .config_api import get_topology as api_get_topology
 from .db import Base, sql_engine
 
 
+PT_OBJ_TYPE_LOCATION = 11
+PT_OBJ_TYPE_SYSTEMTYPE = 12
+PT_OBJ_TYPE_SYSTEM = 13
+PT_OBJ_TYPE_DEVICETYPE = 14
+PT_OBJ_TYPE_DEVICE = 15
+PT_OBJ_TYPE_METERTYPE = 16
+PT_OBJ_TYPE_METERMODEL = 17
+PT_OBJ_TYPE_METER = 18
+PT_OBJ_TYPE_TOPOGROUP = 19
+PT_OBJ_TYPE_TOPODIAGRAM = 20
+
+OBJECT_TYPE_LABELS = {
+    PT_OBJ_TYPE_LOCATION: "位置",
+    PT_OBJ_TYPE_SYSTEMTYPE: "系统类型",
+    PT_OBJ_TYPE_SYSTEM: "系统",
+    PT_OBJ_TYPE_DEVICETYPE: "设备类型",
+    PT_OBJ_TYPE_DEVICE: "设备",
+    PT_OBJ_TYPE_METERTYPE: "仪表类型",
+    PT_OBJ_TYPE_METERMODEL: "仪表型号",
+    PT_OBJ_TYPE_METER: "仪表",
+    PT_OBJ_TYPE_TOPOGROUP: "拓扑图分组",
+    PT_OBJ_TYPE_TOPODIAGRAM: "拓扑图",
+}
+
+ORDER_LABELS = {1: "顺序", 2: "逆序"}
+FILTER_TYPE_LABELS = {1: "所有", 2: "任一"}
+MATCH_TYPE_LABELS = {1: "等于", 2: "不等于", 3: "包含", 4: "不包含"}
+
+
 def _utc_now_iso() -> str:
     return datetime.now(timezone.utc).isoformat()
 
 
+def _current_unix_ts() -> int:
+    return int(datetime.now().timestamp())
+
+
 def _safe_int(raw_value: Any) -> int | None:
     if raw_value is None:
         return None
@@ -81,6 +134,10 @@ class TopologyRegistry(Base):
     source_updated_time: Mapped[str] = mapped_column(
         String(64), nullable=False, default=""
     )
+    data_options_json: Mapped[str] = mapped_column(Text, nullable=False, default="")
+    dimension_config_json: Mapped[str] = mapped_column(
+        Text, nullable=False, default=""
+    )
     refreshed_at: Mapped[str] = mapped_column(String(64), nullable=False)
     is_active: Mapped[int] = mapped_column(Integer, nullable=False, default=1)
 
@@ -192,6 +249,340 @@ class TopologyEntityIndex(Base):
 
 def ensure_topology_cache_tables() -> None:
     Base.metadata.create_all(sql_engine())
+    engine = sql_engine()
+    topology_registry_columns = {
+        item["name"] for item in inspect(engine).get_columns("topology_registry")
+    }
+    with engine.begin() as connection:
+        if "data_options_json" not in topology_registry_columns:
+            connection.execute(
+                text("ALTER TABLE topology_registry ADD COLUMN data_options_json TEXT")
+            )
+        if "dimension_config_json" not in topology_registry_columns:
+            connection.execute(
+                text(
+                    "ALTER TABLE topology_registry ADD COLUMN dimension_config_json TEXT"
+                )
+            )
+
+
+def _dump_json_text(raw_value: Any) -> str:
+    if raw_value is None:
+        return ""
+    return json.dumps(raw_value, ensure_ascii=False, separators=(",", ":"))
+
+
+def _load_json_text(raw_value: str) -> Any:
+    text_value = _text(raw_value)
+    if not text_value:
+        return None
+    try:
+        return json.loads(text_value)
+    except json.JSONDecodeError:
+        return text_value
+
+
+def _build_metric_definitions(data_options: Any) -> dict[str, dict[str, Any]]:
+    if not isinstance(data_options, dict):
+        return {"instant": {}, "accu": {}}
+
+    result: dict[str, dict[str, Any]] = {"instant": {}, "accu": {}}
+    for display in ("instant", "accu"):
+        items = data_options.get(display)
+        if not isinstance(items, list):
+            continue
+        mapped: dict[str, Any] = {}
+        for item in items:
+            if not isinstance(item, dict):
+                continue
+            code = _text(item.get("code"))
+            if not code:
+                continue
+            mapped[code] = {
+                "name": _text(item.get("name")),
+                "unit": _text(item.get("unit")),
+                "type": _safe_int(item.get("type")),
+                "display": bool(item.get("display")),
+                "value_key": _text(item.get("value")),
+            }
+        result[display] = mapped
+    return result
+
+
+def _extract_page_items(payload: Any) -> list[dict[str, Any]]:
+    if not isinstance(payload, dict):
+        return []
+    data = payload.get("data")
+    if isinstance(data, dict):
+        items = data.get("data")
+        if isinstance(items, list):
+            return [item for item in items if isinstance(item, dict)]
+    if isinstance(data, list):
+        return [item for item in data if isinstance(item, dict)]
+    return []
+
+
+def _load_all_pages(fetch_page: Any) -> list[dict[str, Any]]:
+    page_num = 1
+    result: list[dict[str, Any]] = []
+    while True:
+        payload = fetch_page(page_num)
+        result.extend(_extract_page_items(payload))
+        data = payload.get("data") if isinstance(payload, dict) else None
+        if not isinstance(data, dict):
+            break
+        total_page = _safe_int(data.get("total_page")) or page_num
+        if page_num >= total_page:
+            break
+        page_num += 1
+    return result
+
+
+def _flatten_tree_items(items: list[dict[str, Any]]) -> list[dict[str, Any]]:
+    result: list[dict[str, Any]] = []
+    queue = deque(items)
+    while queue:
+        item = queue.popleft()
+        result.append(item)
+        children = item.get("children")
+        if isinstance(children, list):
+            queue.extend(child for child in children if isinstance(child, dict))
+    return result
+
+
+def _to_id_name_map(items: list[dict[str, Any]]) -> dict[int, str]:
+    result: dict[int, str] = {}
+    for item in items:
+        item_id = _safe_int(item.get("id"))
+        if item_id is None:
+            continue
+        result[item_id] = _text(item.get("name"))
+    return result
+
+
+def _load_location_name_map(project_key: str) -> dict[int, str]:
+    items = _load_all_pages(
+        lambda page_num: api_list_locations(
+            project_key, keyword="", page_size=100, page_num=page_num
+        )
+    )
+    return _to_id_name_map(_flatten_tree_items(items))
+
+
+def _load_system_type_name_map(project_key: str) -> dict[int, str]:
+    payload = api_list_system_tree(project_key)
+    items = _extract_page_items(payload)
+    return _to_id_name_map(
+        [item for item in _flatten_tree_items(items) if _safe_int(item.get("type")) == 1]
+    )
+
+
+def _load_system_name_map(project_key: str) -> dict[int, str]:
+    items = _load_all_pages(
+        lambda page_num: api_list_systems(
+            project_key,
+            page_size=100,
+            page_num=page_num,
+            system_type_id=0,
+            show_below=True,
+        )
+    )
+    return _to_id_name_map(items)
+
+
+def _load_device_type_name_map(project_key: str) -> dict[int, str]:
+    return _to_id_name_map(_extract_page_items(api_list_device_types(project_key)))
+
+
+def _load_meter_type_name_map(project_key: str) -> dict[int, str]:
+    return _to_id_name_map(_extract_page_items(api_list_meter_types(project_key)))
+
+
+def _load_device_name_map(project_key: str) -> dict[int, str]:
+    items = _load_all_pages(
+        lambda page_num: api_search_devices(
+            project_key,
+            page_size=100,
+            page_num=page_num,
+            keyword="",
+            location_id=0,
+            show_below=True,
+            system_ids=[],
+            device_type_ids=[],
+        )
+    )
+    return _to_id_name_map(items)
+
+
+def _load_meter_name_map(project_key: str) -> dict[int, str]:
+    items = _load_all_pages(
+        lambda page_num: api_search_meters(
+            project_key,
+            page_size=100,
+            page_num=page_num,
+            keyword="",
+            location_id=0,
+            show_below=True,
+            meter_type_id=0,
+            measurement_location_ids=[],
+            measurement_system_ids=[],
+            measurement_device_type_ids=[],
+            status=None,
+        )
+    )
+    return _to_id_name_map(items)
+
+
+def _load_topology_group_name_map(session: Session, project_key: str) -> dict[int, str]:
+    return {row.group_id: row.group_name for row in _load_group_rows(session, project_key)}
+
+
+def _load_topology_name_map(session: Session, project_key: str) -> dict[int, str]:
+    return {
+        row.topology_id: row.topology_name
+        for row in _load_registry_rows(session, project_key)
+    }
+
+
+def _object_type_label(type_code: int | None) -> str:
+    return OBJECT_TYPE_LABELS.get(type_code or 0, "")
+
+
+def _resolve_field_name_map(
+    session: Session, project_key: str, type_code: int
+) -> dict[int, str]:
+    if type_code == PT_OBJ_TYPE_LOCATION:
+        return _load_location_name_map(project_key)
+    if type_code == PT_OBJ_TYPE_SYSTEMTYPE:
+        return _load_system_type_name_map(project_key)
+    if type_code == PT_OBJ_TYPE_SYSTEM:
+        return _load_system_name_map(project_key)
+    if type_code == PT_OBJ_TYPE_DEVICETYPE:
+        return _load_device_type_name_map(project_key)
+    if type_code == PT_OBJ_TYPE_DEVICE:
+        return _load_device_name_map(project_key)
+    if type_code == PT_OBJ_TYPE_METERTYPE:
+        return _load_meter_type_name_map(project_key)
+    if type_code == PT_OBJ_TYPE_METERMODEL:
+        raise ValueError("type=17 (仪表型号) needs additional API mapping confirmation")
+    if type_code == PT_OBJ_TYPE_METER:
+        return _load_meter_name_map(project_key)
+    if type_code == PT_OBJ_TYPE_TOPOGROUP:
+        return _load_topology_group_name_map(session, project_key)
+    if type_code == PT_OBJ_TYPE_TOPODIAGRAM:
+        return _load_topology_name_map(session, project_key)
+    return {}
+
+
+def _resolve_field_items(
+    session: Session, project_key: str, type_code: int, field_ids: list[int]
+) -> list[dict[str, Any]]:
+    name_map = _resolve_field_name_map(session, project_key, type_code)
+    return [
+        {"id": field_id, "name": name_map.get(field_id, "")}
+        for field_id in field_ids
+    ]
+
+
+def _normalize_int_list(raw_value: Any) -> list[int]:
+    if not isinstance(raw_value, list):
+        return []
+    result: list[int] = []
+    for item in raw_value:
+        normalized = _safe_int(item)
+        if normalized is None:
+            continue
+        result.append(normalized)
+    return result
+
+
+def get_topology_group_config(project_key: str, topology_id: int) -> dict[str, Any]:
+    project_key = _text(project_key)
+    if not project_key:
+        raise ValueError("project_key is required")
+
+    ensure_topology_cache_tables()
+    with Session(sql_engine()) as session:
+        _require_topology_cache(session, project_key)
+        group_path_map = _group_path_map(_load_group_rows(session, project_key))
+        registry_row = _get_registry_or_error(session, project_key, topology_id)
+        if registry_row.topology_type != 2:
+            return {
+                "supported": False,
+                "raw_dimension_config": _load_json_text(
+                    registry_row.dimension_config_json
+                ),
+                "groupings": [],
+                "filter": {
+                    "filter_type": None,
+                    "filter_type_label": "",
+                    "conditions": [],
+                },
+                "mcp_note": "topology.get_group_config only applies to topology_type=2 topologies.",
+            }
+
+        raw_dimension_config = _load_json_text(registry_row.dimension_config_json)
+        if not isinstance(raw_dimension_config, dict):
+            raw_dimension_config = {}
+
+        raw_dimensions = raw_dimension_config.get("dimensions")
+        raw_filter = raw_dimension_config.get("filter")
+        dimensions = raw_dimensions if isinstance(raw_dimensions, list) else []
+        filter_payload = raw_filter if isinstance(raw_filter, dict) else {}
+
+        groupings: list[dict[str, Any]] = []
+        for item in dimensions:
+            if not isinstance(item, dict):
+                continue
+            type_code = _safe_int(item.get("type"))
+            level = _safe_int(item.get("level"))
+            order = _safe_int(item.get("order"))
+            groupings.append(
+                {
+                    "name": _text(item.get("name")),
+                    "type": type_code,
+                    "type_label": _object_type_label(type_code),
+                    "level": level,
+                    "order": order,
+                    "order_label": ORDER_LABELS.get(order or 0, ""),
+                }
+            )
+
+        conditions_payload = filter_payload.get("conditions")
+        conditions = conditions_payload if isinstance(conditions_payload, list) else []
+        resolved_conditions: list[dict[str, Any]] = []
+        for item in conditions:
+            if not isinstance(item, dict):
+                continue
+            type_code = _safe_int(item.get("type")) or 0
+            level = _safe_int(item.get("level"))
+            match_type = _safe_int(item.get("match_type"))
+            field_ids = _normalize_int_list(item.get("fields"))
+            resolved_conditions.append(
+                {
+                    "type": type_code,
+                    "type_label": _object_type_label(type_code),
+                    "level": level,
+                    "match_type": match_type,
+                    "match_type_label": MATCH_TYPE_LABELS.get(match_type or 0, ""),
+                    "fields": field_ids,
+                    "field_items": _resolve_field_items(
+                        session, project_key, type_code, field_ids
+                    ),
+                }
+            )
+
+        filter_type = _safe_int(filter_payload.get("filter_type"))
+        return {
+            "supported": True,
+            "raw_dimension_config": raw_dimension_config,
+            "groupings": groupings,
+            "filter": {
+                "filter_type": filter_type,
+                "filter_type_label": FILTER_TYPE_LABELS.get(filter_type or 0, ""),
+                "conditions": resolved_conditions,
+            },
+        }
 
 
 def _collect_group_and_topology_refs(
@@ -616,6 +1007,10 @@ def refresh_topology_cache(
                 or topology_ref.get("group_id"),
                 "root_shape": root_shape,
                 "source_updated_time": _text(detail_data.get("updated_time")),
+                "data_options_json": _dump_json_text(detail_data.get("data_options")),
+                "dimension_config_json": _dump_json_text(
+                    detail_data.get("dimension_config")
+                ),
                 "refreshed_at": refreshed_at,
                 "is_active": 1,
             }
@@ -942,6 +1337,124 @@ def _node_payload(node_row: TopologyNode) -> dict[str, Any]:
     }
 
 
+def _floor_hour_ts(ts: int) -> int:
+    return max(0, int(ts) - (int(ts) % 3600))
+
+
+def _floor_day_ts(ts: int) -> int:
+    current = datetime.fromtimestamp(int(ts)).astimezone()
+    return int(
+        current.replace(hour=0, minute=0, second=0, microsecond=0).timestamp()
+    )
+
+
+def _hourly_window_timestamps(base_ts: int) -> list[int]:
+    current_hour_ts = _floor_hour_ts(base_ts)
+    return [current_hour_ts - 3600 * offset for offset in range(1, 13)]
+
+
+def _daily_window_timestamps(base_ts: int) -> list[int]:
+    current_day_ts = _floor_day_ts(base_ts)
+    return [current_day_ts - 86400 * offset for offset in range(0, 7)]
+
+
+def _extract_topology_data_map(payload: Any) -> dict[str, Any]:
+    if not isinstance(payload, dict):
+        return {}
+    data = payload.get("data")
+    if not isinstance(data, dict):
+        return {}
+    return {str(node_id): node_values for node_id, node_values in data.items()}
+
+
+def _fetch_topology_runtime_data(
+    project_key: str, topology_id: int, *, base_ts: int | None = None
+) -> dict[str, Any]:
+    effective_base_ts = int(base_ts if base_ts is not None else _current_unix_ts())
+    hourly_timestamps = _hourly_window_timestamps(effective_base_ts)
+    daily_timestamps = _daily_window_timestamps(effective_base_ts)
+
+    def fetch_instant() -> dict[str, Any]:
+        return _extract_topology_data_map(
+            api_get_topology_data(project_key, topology_id, display="instant")
+        )
+
+    def fetch_accu(accu_step: int, ts: int) -> tuple[int, dict[str, Any]]:
+        payload = api_get_topology_data(
+            project_key,
+            topology_id,
+            display="accu",
+            accu_step=accu_step,
+            ts=ts,
+        )
+        return ts, _extract_topology_data_map(payload)
+
+    max_workers = max(1, min(8, 1 + len(hourly_timestamps) + len(daily_timestamps)))
+    with ThreadPoolExecutor(max_workers=max_workers) as executor:
+        instant_future = executor.submit(fetch_instant)
+        hourly_futures = [
+            executor.submit(fetch_accu, 2, ts_value) for ts_value in hourly_timestamps
+        ]
+        daily_futures = [
+            executor.submit(fetch_accu, 3, ts_value) for ts_value in daily_timestamps
+        ]
+
+        instant_map = instant_future.result()
+        hourly_series = [future.result() for future in hourly_futures]
+        daily_series = [future.result() for future in daily_futures]
+
+    return {
+        "base_ts": effective_base_ts,
+        "instant": instant_map,
+        "hourly": [
+            {"ts": ts_value, "data_map": data_map} for ts_value, data_map in hourly_series
+        ],
+        "daily": [
+            {"ts": ts_value, "data_map": data_map} for ts_value, data_map in daily_series
+        ],
+        "data_window": {
+            "hourly_ts": hourly_timestamps,
+            "daily_ts": daily_timestamps,
+        },
+    }
+
+
+def _attach_runtime_data_to_node(
+    node_payload: dict[str, Any], runtime_bundle: dict[str, Any]
+) -> dict[str, Any]:
+    node_id = str(node_payload.get("node_id") or "").strip()
+    instant_map = runtime_bundle.get("instant") or {}
+    hourly_series = runtime_bundle.get("hourly") or []
+    daily_series = runtime_bundle.get("daily") or []
+
+    node_payload["data"] = {
+        "instant": instant_map.get(node_id),
+        "accu": {
+            "hourly": [
+                {
+                    "ts": item["ts"],
+                    "values": item["data_map"].get(node_id),
+                }
+                for item in hourly_series
+            ],
+            "daily": [
+                {
+                    "ts": item["ts"],
+                    "values": item["data_map"].get(node_id),
+                }
+                for item in daily_series
+            ],
+        },
+    }
+    return node_payload
+
+
+def _attach_runtime_data_to_nodes(
+    nodes: list[dict[str, Any]], runtime_bundle: dict[str, Any]
+) -> list[dict[str, Any]]:
+    return [_attach_runtime_data_to_node(node_payload, runtime_bundle) for node_payload in nodes]
+
+
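The attach step is pure dict reshaping; a self-contained sketch with a hypothetical single-node bundle (the `n1` id and metric values are invented for illustration):

```python
from typing import Any

def attach_runtime_data(node_payload: dict[str, Any], bundle: dict[str, Any]) -> dict[str, Any]:
    # Pick this node's slice out of each per-topology data map.
    node_id = str(node_payload.get("node_id") or "").strip()
    node_payload["data"] = {
        "instant": (bundle.get("instant") or {}).get(node_id),
        "accu": {
            "hourly": [
                {"ts": item["ts"], "values": item["data_map"].get(node_id)}
                for item in bundle.get("hourly") or []
            ],
            "daily": [
                {"ts": item["ts"], "values": item["data_map"].get(node_id)}
                for item in bundle.get("daily") or []
            ],
        },
    }
    return node_payload

bundle = {
    "instant": {"n1": {"power": 3.2}},
    "hourly": [{"ts": 3600, "data_map": {"n1": {"energy": 1.5}}}],
    "daily": [{"ts": 0, "data_map": {}}],
}
node = attach_runtime_data({"node_id": "n1"}, bundle)
```

Nodes absent from a data map simply get `None` values rather than raising, which keeps partial upstream responses usable.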
 def _load_node_map(
     session: Session, project_key: str, topology_id: int
 ) -> dict[str, TopologyNode]:
@@ -1079,6 +1592,7 @@ def _collect_descendants(
 def _topology_metadata_payload(
     registry_row: TopologyRegistry, group_path_text: str
 ) -> dict[str, Any]:
+    data_options = _load_json_text(registry_row.data_options_json)
     return {
         "topology_id": registry_row.topology_id,
         "topology_name": registry_row.topology_name,
@@ -1087,6 +1601,9 @@ def _topology_metadata_payload(
         "group_id": registry_row.group_id,
         "group_path_text": group_path_text,
         "root_shape": registry_row.root_shape,
+        "data_options": data_options,
+        "metric_definitions": _build_metric_definitions(data_options),
+        "dimension_config": _load_json_text(registry_row.dimension_config_json),
     }


@@ -1121,6 +1638,7 @@ def get_topology_node(
         node_row = _get_node_or_error(
             session, project_key, topology_id, resolved_node_id
         )
+    runtime_bundle = _fetch_topology_runtime_data(project_key, topology_id)

     parent_ids = parents_by_node.get(resolved_node_id, [])
     child_ids = children_by_node.get(resolved_node_id, []) if include_children else []
@@ -1133,13 +1651,20 @@ def get_topology_node(
         ]

     return {
+        "data_window": runtime_bundle["data_window"],
         "topology": _topology_metadata_payload(
             registry_row, group_path_map.get(registry_row.group_id or -1, "")
         ),
-        "node": _node_payload(node_row),
-        "parents": _node_list_payload(node_map, parent_ids),
-        "children": _node_list_payload(node_map, child_ids),
-        "siblings": _node_list_payload(node_map, sibling_ids),
+        "node": _attach_runtime_data_to_node(_node_payload(node_row), runtime_bundle),
+        "parents": _attach_runtime_data_to_nodes(
+            _node_list_payload(node_map, parent_ids), runtime_bundle
+        ),
+        "children": _attach_runtime_data_to_nodes(
+            _node_list_payload(node_map, child_ids), runtime_bundle
+        ),
+        "siblings": _attach_runtime_data_to_nodes(
+            _node_list_payload(node_map, sibling_ids), runtime_bundle
+        ),
     }



+ 170 - 0
mcp-design.md

@@ -273,6 +273,82 @@ MCP handling suggestions:
 - Currently supports `page_size=-1` to return the full result set
 - Used by meter search
 
+## 3.8 Topology detail and node data
+
+APIs:
+
+- `POST /api/configapi/topo/get`
+- `POST /api/configapi/topo/get_data`
+
+Purpose:
+
+- `topo/get`: reads a single topology's structure, `data_options`, and `dimension_config`
+- `topo/get_data`: reads a single topology's node data for one display dimension at one point in time
+
+Key constraints:
+
+- `topo/get_data` supports only one topology, one `display`, and one timestamp per call
+- `display=instant` takes no time parameters
+- `display=accu` requires explicit `accu_step` and `ts`
+- Time-range data cannot be returned directly; the MCP layer must aggregate multiple calls
+
+Current MCP handling:
+
+- `topology.get_node` makes 1 real-time `instant` call per query
+- plus 12 hourly-accumulation calls and 7 daily-accumulation calls
+- node values are matched directly by `node_id`
+
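Because `topo/get_data` accepts only one timestamp per call, the window above expands into a fixed fan-out of upstream requests; a sketch of the request plan (`accu_step=2` for hourly and `accu_step=3` for daily match the values used by the code in this change):

```python
def build_request_plan(hourly_ts: list[int], daily_ts: list[int]) -> list[dict]:
    # 1 instant call plus one accu call per window timestamp.
    plan = [{"display": "instant"}]
    plan += [{"display": "accu", "accu_step": 2, "ts": ts} for ts in hourly_ts]
    plan += [{"display": "accu", "accu_step": 3, "ts": ts} for ts in daily_ts]
    return plan

# 12 hourly + 7 daily window points -> 20 upstream calls in total.
plan = build_request_plan(list(range(12)), list(range(7)))
```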
+## 3.9 Topology grouping configuration
+
+Source field:
+
+- `dimension_config` in the `topo/get` response
+
+Scope:
+
+- only topologies with `topology_type=2`
+
+Structure:
+
+- `dimension_config.dimensions` holds the grouping configuration
+- `dimension_config.filter` holds the filter configuration
+- `filter.conditions[].fields` stores lists of business-object IDs whose names must be resolved separately
+
+Enum semantics:
+
+- `order`: `1=ascending`, `2=descending`
+- `filter_type`: `1=all`, `2=any`
+- `match_type`: `1=equals`, `2=not equals`, `3=contains`, `4=not contains`
+
+Object type semantics:
+
+- `11=location`
+- `12=system type`
+- `13=system`
+- `14=device type`
+- `15=device`
+- `16=meter type`
+- `17=meter model`
+- `18=meter`
+- `19=topology group`
+- `20=topology`
+
+Name resolution sources:
+
+- `11` -> `list_locations`
+- `12` -> `list_system_tree`
+- `13` -> `list_systems`
+- `14` -> `list_device_types`
+- `15` -> `search_devices`
+- `16` -> `list_meter_types`
+- `18` -> `search_meters`
+- `19` -> local topology group cache
+- `20` -> local topology registry cache
+
+Current limitation:
+
+- `17` (meter model) has no name resolution yet; a follow-up API or rule is needed
+
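The code-to-resolver mapping is small enough to keep as a plain table in code; a sketch (tool names as listed above; the two cache entries use placeholder names, and `17` is deliberately absent):

```python
OBJECT_TYPE_RESOLVERS: dict[int, str] = {
    11: "list_locations",            # location
    12: "list_system_tree",          # system type
    13: "list_systems",              # system
    14: "list_device_types",         # device type
    15: "search_devices",            # device
    16: "list_meter_types",          # meter type
    # 17 (meter model): no name resolution yet
    18: "search_meters",             # meter
    19: "topology_group_cache",      # local topology group cache (placeholder name)
    20: "topology_registry_cache",   # local topology registry cache (placeholder name)
}
```

Keeping `17` out of the table (rather than mapping it to `None`) lets callers treat a missing key as "unsupported" in one place.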
 ## 4. Business relationships
 
 ## 4.1 Location
@@ -518,6 +594,100 @@ MCP 处理建议:
 - If the target is not found on the current page, keep passing a larger `page_num`
 - The first version does not strictly validate `measurement_flag`; query conditions are passed through as-is
 
+## 5.8 `topology.group_list`
+
+Purpose:
+
+- returns the cached topology group tree
+
+Suggested parameters:
+
+- `project_key: str`
+
+## 5.9 `topology.list`
+
+Purpose:
+
+- returns the cached topology list
+- can be filtered by group or object type
+
+Suggested parameters:
+
+- `project_key: str`
+- `group_id: int | None = None`
+- `object_type_code: int | None = None`
+
+## 5.10 `topology.get_node`
+
+Purpose:
+
+- fetches one node plus its immediate neighborhood
+- also returns the topology's real-time node values and accumulation windows
+
+Suggested parameters:
+
+- `project_key: str`
+- `topology_id: int`
+- `node_id: str = 'root'`
+- `include_siblings: bool = True`
+- `include_children: bool = True`
+
+Return structure:
+
+- `data_window`
+- `topology`
+- `node`
+- `parents`
+- `children`
+- `siblings`
+
+Notes:
+
+- `topology` includes `data_options`, `metric_definitions`, and `dimension_config`
+- `node.data.instant` is the real-time value
+- `node.data.accu.hourly` holds accumulated values for the last 12 whole hours
+- `node.data.accu.daily` holds accumulated values for the last 7 midnights
+
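A skeleton of the `topology.get_node` response shape described above (all values are illustrative placeholders, not real data):

```python
# Hypothetical empty-response skeleton; only the key layout is taken
# from the design doc, the placeholder values are invented.
response_skeleton = {
    "data_window": {"hourly_ts": [], "daily_ts": []},
    "topology": {
        "topology_id": 0,
        "data_options": None,
        "metric_definitions": [],
        "dimension_config": None,
    },
    "node": {
        "node_id": "root",
        "data": {
            "instant": None,
            "accu": {"hourly": [], "daily": []},
        },
    },
    "parents": [],
    "children": [],
    "siblings": [],
}
```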
+## 5.11 `topology.get_group_config`
+
+Purpose:
+
+- returns the grouping and filter configuration of a type=2 topology
+
+Suggested parameters:
+
+- `project_key: str`
+- `topology_id: int`
+
+Return structure:
+
+- `supported`
+- `raw_dimension_config`
+- `groupings`
+- `filter`
+
+Notes:
+
+- `groupings` is derived from `dimension_config.dimensions`
+- `filter` is derived from `dimension_config.filter`
+- `filter.conditions[].field_items` holds the resolved `id + name` pairs
+
+## 5.12 `topology.find_context`
+
+Purpose:
+
+- quickly looks up the topology node context matching a given entity
+
+Suggested parameters:
+
+- `project_key: str`
+- `entity_type: str`
+- `entity_id: int`
+- `topology_id: int | None = None`
+- `include_siblings: bool = True`
+- `ancestor_depth: int = 5`
+- `descendant_depth: int = 2`
+
 ## 6. Suggested MCP call flows
 
 ## 6.1 Find meters by location

+ 12 - 0
opencode.jsonc

@@ -0,0 +1,12 @@
+{
+  "$schema": "https://opencode.ai/config.json",
+  "mcp": {
+    "instrument_config": {
+      "type": "remote",
+      "url": "http://127.0.0.1:8500/mcp",
+      "enabled": true,
+      "oauth": false,
+      "timeout": 15000
+    }
+  }
+}

+ 1 - 1
pyproject.toml

@@ -1,6 +1,6 @@
 [project]
 name = "instrument-config-mcp"
-version = "0.1.0"
+version = "1.0.0"
 description = "FastMCP server for instrument config APIs"
 readme = "mcp-design.md"
 requires-python = ">=3.11,<3.14"

+ 108 - 0
scripts/smoke_test.py

@@ -5,6 +5,8 @@ import json
 from typing import Any
 
 from instrument_config_mcp.config_api import (
+    get_topology,
+    get_topology_data,
     list_device_types,
     list_locations,
     list_meter_types,
@@ -14,6 +16,13 @@ from instrument_config_mcp.config_api import (
     search_meters,
     search_points,
 )
+from instrument_config_mcp.topology_cache import (
+    find_topology_context,
+    get_topology_group_config,
+    get_topology_node,
+    list_topologies,
+    list_topology_groups,
+)
 
 
 def _parse_bool(raw: str) -> bool:
@@ -84,6 +93,45 @@ def build_parser() -> argparse.ArgumentParser:
     p.add_argument("--page-size", type=int, default=100)
     p.add_argument("--page-num", type=int, default=1)
 
+    p = subparsers.add_parser("get-topology")
+    p.add_argument("--project-key", required=True)
+    p.add_argument("--id", type=int, required=True)
+
+    p = subparsers.add_parser("get-topology-data")
+    p.add_argument("--project-key", required=True)
+    p.add_argument("--id", type=int, required=True)
+    p.add_argument("--display", choices=["instant", "accu"], required=True)
+    p.add_argument("--accu-step", type=int)
+    p.add_argument("--ts", type=int)
+
+    p = subparsers.add_parser("topology-group-list")
+    p.add_argument("--project-key", required=True)
+
+    p = subparsers.add_parser("topology-list")
+    p.add_argument("--project-key", required=True)
+    p.add_argument("--group-id", type=int)
+    p.add_argument("--object-type-code", type=int)
+
+    p = subparsers.add_parser("topology-get-node")
+    p.add_argument("--project-key", required=True)
+    p.add_argument("--topology-id", type=int, required=True)
+    p.add_argument("--node-id", default="root")
+    p.add_argument("--include-siblings", type=_parse_bool, default=True)
+    p.add_argument("--include-children", type=_parse_bool, default=True)
+
+    p = subparsers.add_parser("topology-get-group-config")
+    p.add_argument("--project-key", required=True)
+    p.add_argument("--topology-id", type=int, required=True)
+
+    p = subparsers.add_parser("topology-find-context")
+    p.add_argument("--project-key", required=True)
+    p.add_argument("--entity-type", choices=["meter", "device"], required=True)
+    p.add_argument("--entity-id", type=int, required=True)
+    p.add_argument("--topology-id", type=int)
+    p.add_argument("--include-siblings", type=_parse_bool, default=True)
+    p.add_argument("--ancestor-depth", type=int, default=5)
+    p.add_argument("--descendant-depth", type=int, default=2)
+
     return parser
 
 
@@ -163,6 +211,66 @@ def main() -> None:
         )
         return
 
+    if args.command == "get-topology":
+        _print(get_topology(args.project_key, id=args.id))
+        return
+
+    if args.command == "get-topology-data":
+        _print(
+            get_topology_data(
+                args.project_key,
+                id=args.id,
+                display=args.display,
+                accu_step=args.accu_step,
+                ts=args.ts,
+            )
+        )
+        return
+
+    if args.command == "topology-group-list":
+        _print(list_topology_groups(args.project_key))
+        return
+
+    if args.command == "topology-list":
+        _print(
+            list_topologies(
+                args.project_key,
+                group_id=args.group_id,
+                object_type_code=args.object_type_code,
+            )
+        )
+        return
+
+    if args.command == "topology-get-node":
+        _print(
+            get_topology_node(
+                args.project_key,
+                args.topology_id,
+                args.node_id,
+                include_siblings=args.include_siblings,
+                include_children=args.include_children,
+            )
+        )
+        return
+
+    if args.command == "topology-get-group-config":
+        _print(get_topology_group_config(args.project_key, args.topology_id))
+        return
+
+    if args.command == "topology-find-context":
+        _print(
+            find_topology_context(
+                args.project_key,
+                args.entity_type,
+                args.entity_id,
+                topology_id=args.topology_id,
+                include_siblings=args.include_siblings,
+                ancestor_depth=args.ancestor_depth,
+                descendant_depth=args.descendant_depth,
+            )
+        )
+        return
+
     raise ValueError(f"unsupported command: {args.command}")