Fatima T Al-Khawaldeh
We propose a novel Hierarchical Reinforcement Learning (HRL) model to detect the factuality of a claim and to generate a new, corrected claim when it is not factual. Initially, we segment each sentence into several clauses using sentence-level discourse segmentation and then measure the cosine similarity to decide whether a clause is relevant to the claim. All relevant clauses are sent to the high-level policy, where deep communicating agents are implemented to encode them. Each agent adopts a hierarchical attention mechanism, with word-level and clause-level attention networks, to select words and clauses that are informative for a specific claim. In the word-level claim attention network, the word encoding layer concatenates the claim representation to each word embedding and then summarizes the information with a bi-directional LSTM. The word attention layer focuses on the terms that are important to the meaning of the clause with respect to the claim, producing clause vectors. In the clause-level claim attention network, the clause encoding layer applies a bi-directional LSTM to capture contextual clause representations. After that, in the clause attention layer, an attention mechanism computes the attention weight for each claim-clause pair to produce contextual information conditioned on the claim representation. We will use a message-sharing mechanism so that each agent's encoder generates better contextual information conditioned on the messages received from the other agents. The context and the states from the environment are used to create all possible sub-goals (to copy or to generate), which the low-level agent policy should achieve by selecting a series of actions (words) to produce a new sequence of words. We will apply a rewarder function to compute the factuality of the new claim using entailment and semantic similarity metrics.
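The first stage, deciding whether a segmented clause is relevant to the claim by cosine similarity, can be sketched as follows. This is a minimal illustration assuming the claim and clauses are already embedded as dense vectors; the `threshold` value is an assumption, since the abstract does not specify one.

```python
import math

def cosine_similarity(u, v):
    # Cosine similarity between two dense vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    if norm_u == 0.0 or norm_v == 0.0:
        return 0.0
    return dot / (norm_u * norm_v)

def filter_relevant_clauses(claim_vec, clause_vecs, threshold=0.5):
    # Keep the indices of clauses whose embedding is similar enough to the
    # claim embedding; only these are forwarded to the high-level policy.
    return [i for i, clause in enumerate(clause_vecs)
            if cosine_similarity(claim_vec, clause) >= threshold]
```

In practice the embeddings would come from the encoder described below; here any fixed-dimension vectors work.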
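The word attention layer that weighs terms against the claim and produces a clause vector can be sketched as a claim-conditioned softmax pooling. This is a simplified stand-in for the learned attention network: scoring each word state by a dot product with the claim vector is an assumption for illustration, not the paper's exact parameterization, and the word states would in the full model come from the bi-directional LSTM.

```python
import math

def attention_pool(word_states, claim_vec):
    # Score each word's hidden state against the claim vector, normalize the
    # scores with a softmax, and return the clause vector as the
    # attention-weighted sum of word states (plus the weights themselves).
    scores = [sum(w_d * c_d for w_d, c_d in zip(w, claim_vec))
              for w in word_states]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(word_states[0])
    clause_vec = [sum(weights[t] * word_states[t][d]
                      for t in range(len(word_states)))
                  for d in range(dim)]
    return clause_vec, weights
```

The same pooling pattern applies at the clause level, with clause representations scored against the claim instead of word states.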
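The rewarder combining entailment and semantic similarity could look like the following. The weighted average and the `alpha` parameter are illustrative assumptions; the abstract names the two metrics but not the exact combination.

```python
def factuality_reward(entailment_prob, semantic_sim, alpha=0.5):
    # Hypothetical rewarder: blend the entailment probability of the generated
    # claim against the evidence with its semantic similarity to that evidence.
    # alpha trades off the two signals (an assumption, not the paper's formula).
    return alpha * entailment_prob + (1.0 - alpha) * semantic_sim
```

In the HRL loop this scalar would be the reward signal guiding the low-level policy's copy/generate actions.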