
Ranking Nodes in Temporal Graphs

Node ranking in temporal networks is often shaped by heterogeneous context from node content, temporal, and structural dimensions. This project introduces TGNet, a deep learning framework for node ranking in heterogeneous temporal graphs. TGNet uses a variant of Recurrent Neural Network to capture context evolution and extract context features for nodes. It incorporates a novel influence network to dynamically estimate temporal and structural influence among nodes over time. To cope with label sparsity, it integrates graph smoothness constraints as a weak form of supervision. We show that TGNet is feasible for large-scale networks by developing efficient learning and inference algorithms with optimization techniques. Using real-life data, we experimentally verify the effectiveness and efficiency of TGNet. We also show that TGNet yields intuitive explanations for applications such as alert detection and academic impact ranking, as verified by our case studies.

Why Is Ranking Nodes in Temporal Graphs Different From Ranking in Static Graphs?

Temporal graphs have been widely applied to model dynamic networks. A temporal graph is a sequence of graph snapshots, where each snapshot (G, t) encodes a graph G observed at an associated timestamp t. Emerging applications call for efficient predictive models that can effectively suggest and maintain high-priority nodes in dynamic networks. The need for such models is evident in causal analysis, anomaly and attack detection, and link prediction in social networks.
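To make the definition concrete, here is a minimal sketch of a temporal graph as a sequence of (edge list, timestamp) snapshots. The `Snapshot` type and `nodes_at` helper are illustrative names, not part of TGNet's implementation:

```python
from collections import namedtuple

# A temporal graph as a sequence of (graph, timestamp) snapshots.
# Here a "graph" is simply an edge list; real snapshots would also
# carry node content features.
Snapshot = namedtuple("Snapshot", ["edges", "t"])

temporal_graph = [
    Snapshot(edges=[("a", "b"), ("b", "c")], t=0),
    Snapshot(edges=[("a", "b"), ("c", "d")], t=1),
    Snapshot(edges=[("b", "c"), ("c", "d")], t=2),
]

def nodes_at(tg, t):
    """Return the set of nodes present in the snapshot at timestamp t."""
    for snap in tg:
        if snap.t == t:
            return {v for edge in snap.edges for v in edge}
    return set()

# The node set itself can change over time:
# nodes_at(temporal_graph, 0) -> {'a', 'b', 'c'}
# nodes_at(temporal_graph, 1) -> {'a', 'b', 'c', 'd'}
```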
Learning to rank nodes in static graphs has been studied. A common practice in prior work is to learn ranking functions from latent features of the graph structure. Learning node ranks in dynamic networks is nevertheless more involved than its counterpart in static networks. In particular, node ranks can be influenced by rich context from heterogeneous node information, network structure, and temporal dynamics.

TGNet Example

While desirable, learning to rank nodes in temporal graphs is more challenging than its counterpart over static graphs.

Overview of TGNet Model.


(Figure: the TGNet model and its layer design)
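The abstract mentions that TGNet evolves node context features with a variant of a Recurrent Neural Network. As a generic sketch only (a standard GRU-style update, not the paper's exact cell), one recurrent step over a node's hidden context state could look like:

```python
import numpy as np

def gru_step(h, x, W_z, U_z, W_r, U_r, W_h, U_h):
    """One GRU-style update of a node's hidden context state h,
    given a new context input x. Illustrative, not TGNet's actual cell."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(W_z @ x + U_z @ h)               # update gate
    r = sigmoid(W_r @ x + U_r @ h)               # reset gate
    h_cand = np.tanh(W_h @ x + U_h @ (r * h))    # candidate state
    return (1 - z) * h + z * h_cand              # blended new state

d = 4
rng = np.random.default_rng(0)
params = [rng.standard_normal((d, d)) * 0.1 for _ in range(6)]
h0 = np.zeros(d)                 # initial context state
x = rng.standard_normal(d)       # context input at the next snapshot
h1 = gru_step(h0, x, *params)    # evolved context state
```

Applied once per snapshot, such an update lets a node's feature vector track how its context changes over time.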

How Does TGNet Model Influence Among Nodes?

We introduce the influence network, a novel network layer used by TGNet to cope with context dynamics. Unlike conventional methods that adopt fixed pairwise influence, the influence network layer takes updated node context features as input and dynamically estimates the amount of temporal and structural influence among nodes.
“Node-centric” vs. “edge-centric”. Conventional methods adopt an “edge-centric” influence model, where the parameters are associated with edges. However, the assumption of fixed edge influence rarely holds when context changes. Moreover, it is daunting for users to choose edge types with the expected generalization power for unseen test data. In contrast, TGNet adopts an influence network layer, denoted InfNet, which models influence with a “node-centric” approach. InfNet is built on two intuitions: (1) the influence between two nodes is conditioned on their contexts; and (2) a node's context is determined by its hidden state.
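The two intuitions above can be sketched as a score computed from the two nodes' hidden states, recomputed whenever those states change. The concrete scoring function below (a sigmoid over concatenated hidden states) is an assumption for illustration, not TGNet's exact InfNet:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def influence(h_u, h_v, w, b=0.0):
    """'Node-centric' influence: a score conditioned on both nodes'
    hidden states. Contrast with an 'edge-centric' model, which would
    attach a fixed learned weight to the edge (u, v) itself."""
    return sigmoid(w @ np.concatenate([h_u, h_v]) + b)

rng = np.random.default_rng(1)
d = 3
w = rng.standard_normal(2 * d)                       # shared parameters
h_u, h_v = rng.standard_normal(d), rng.standard_normal(d)

s1 = influence(h_u, h_v, w)
# When a node's context (hidden state) is updated, the estimated
# influence changes -- no per-edge parameter needs to be re-learned.
s2 = influence(h_u + 0.5, h_v, w)
```

Because the parameters are shared across node pairs rather than tied to specific edges, such a model can also score influence for edges never seen during training.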

(Figure: TGNet inference)

Training the Model

Given labeled data, we introduce an end-to-end training algorithm that jointly learns the model parameters.

(Figure: TGNet loss function)
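The abstract states that TGNet copes with label sparsity by adding graph smoothness constraints as weak supervision. A minimal sketch of what such an objective could look like, using a pairwise hinge ranking loss plus a smoothness regularizer (the exact loss form and the trade-off weight are assumptions, not the paper's formulation):

```python
import numpy as np

def ranking_loss(scores, pairs):
    """Pairwise hinge loss: node i should rank above node j for each (i, j)."""
    return sum(max(0.0, 1.0 - (scores[i] - scores[j])) for i, j in pairs)

def smoothness(scores, edges):
    """Graph smoothness regularizer: connected nodes should receive
    similar scores. Acts as weak supervision when labels are sparse."""
    return sum((scores[u] - scores[v]) ** 2 for u, v in edges)

scores = np.array([0.9, 0.2, 0.8, 0.1])   # model's current ranking scores
labeled_pairs = [(0, 1), (2, 3)]          # supervision: 0 > 1, 2 > 3
edges = [(0, 2), (1, 3)]                  # unlabeled structure
lam = 0.1                                 # trade-off weight (illustrative)

loss = ranking_loss(scores, labeled_pairs) + lam * smoothness(scores, edges)
```

The smoothness term propagates the few labeled comparisons along edges, so unlabeled nodes still receive a training signal.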

Datasets

We conducted experiments using four real-world datasets.

Dataset                     SLD       IDS      MAG      APC
# of nodes                  61.4K     5.7M     2.5M     2.1M
# of edges                  258.1K    11.5M    16.2M    9.5M
# of snapshots              19.4K     66.7K    12K      1.5K
# of training node pairs    20K       100K     100K     100K

Publications

  1. TGNet: Learning to Rank Nodes in Temporal Graphs. [Paper][Slide]
    ACM International Conference on Information and Knowledge Management (CIKM), 2018.
    Qi Song, Bo Zong, Yinghui Wu, Lu-An Tang, Hui Zhang, Guofei Jiang and Haifeng Chen

Acknowledgements

This project was done when Qi was an intern at NEC Labs America and was supported in part by NSF IIS-1633629.