IntegriRef-Bench / l1_intent.jsonl
{"id": "l1_pair_001", "split": "l1_intent", "category": "supporting", "signal": "none", "citing_sentence": "As demonstrated by Vaswani et al. [1], attention mechanisms can replace recurrence entirely and yield superior performance on sequence-to-sequence tasks when sufficient computational resources are available.", "citation_key": "1", "cited_doi": "10.48550/arXiv.1706.03762", "cited_title": "Attention Is All You Need", "cited_abstract": "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the...", "expected_intent": "supporting", "expected_misrepresents": false, "note": "Strong SUPPORTING cue 'as demonstrated by' immediately before citation. Abstract confirms attention-only architecture."}
{"id": "l1_pair_002", "split": "l1_intent", "category": "supporting", "signal": "none", "citing_sentence": "Our experimental results are consistent with the findings of Rajpurkar et al. [2], confirming that reading comprehension models trained on large annotated datasets can match human performance on extractive question answering.", "citation_key": "2", "cited_doi": "10.18653/v1/D16-1264", "cited_title": "SQuAD: 100,000+ Questions for Machine Comprehension of Text", "cited_abstract": "We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles. We show that logistic regression and neural network models trained on SQuAD approach human-level performance, with...", "expected_intent": "supporting", "expected_misrepresents": false, "note": "SUPPORTING cue phrase 'consistent with the findings of'. Both 'consistent with' and 'findings of' are in _SUPPORTING_CUES."}
{"id": "l1_pair_003", "split": "l1_intent", "category": "supporting", "signal": "none", "citing_sentence": "Following the approach of Devlin et al. [3], we pre-train a masked language model on domain-specific corpora before fine-tuning on downstream clinical NLP tasks.", "citation_key": "3", "cited_doi": "10.18653/v1/N19-1423", "cited_title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "cited_abstract": "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both...", "expected_intent": "supporting", "expected_misrepresents": false, "note": "SUPPORTING/USING via 'following the approach of'. Intent classifier merges USING → SUPPORTING in 3-class output."}
{"id": "l1_pair_004", "split": "l1_intent", "category": "supporting", "signal": "none", "citing_sentence": "This finding is in line with prior evidence from Srivastava et al. [4], who showed that dropout regularization reduces overfitting in deep neural networks even without explicit weight decay.", "citation_key": "4", "cited_doi": "10.5555/2627435.2670313", "cited_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "cited_abstract": "Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural...", "expected_intent": "supporting", "expected_misrepresents": false, "note": "SUPPORTING cue 'in line with prior evidence from'. Confirms abstract's finding about overfitting."}
{"id": "l1_pair_005", "split": "l1_intent", "category": "supporting", "signal": "none", "citing_sentence": "The model achieves an accuracy of 94.2% on the CIFAR-10 benchmark, consistent with results reported by He et al. [5] for deep residual networks of comparable depth.", "citation_key": "5", "cited_doi": "10.1109/CVPR.2016.90", "cited_title": "Deep Residual Learning for Image Recognition", "cited_abstract": "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of...", "expected_intent": "supporting", "expected_misrepresents": false, "note": "SUPPORTING via 'consistent with results reported by'. Performance comparison context."}
{"id": "l1_pair_006", "split": "l1_intent", "category": "supporting", "signal": "none", "citing_sentence": "As shown by Hochreiter and Schmidhuber [6], gating mechanisms in long short-term memory networks effectively address the vanishing gradient problem, which our ablation study further confirms under longer sequence lengths.", "citation_key": "6", "cited_doi": "10.1162/neco.1997.9.8.1735", "cited_title": "Long Short-Term Memory", "cited_abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's 1991 analysis of this problem, then address it by introducing a novel, efficient, gradient-based...", "expected_intent": "supporting", "expected_misrepresents": false, "note": "SUPPORTING cue 'as shown by'. The citing sentence correctly reflects the cited paper's contribution on vanishing gradients."}
{"id": "l1_pair_007", "split": "l1_intent", "category": "contrasting", "signal": "none", "citing_sentence": "In contrast to the claims of Bengio et al. [7], our experiments show that curriculum learning does not consistently improve convergence when training transformers on text generation tasks.", "citation_key": "7", "cited_doi": "10.1145/1553374.1553380", "cited_title": "Curriculum learning", "cited_abstract": "Humans and animals learn much better when the examples are not randomly presented but organized in a meaningful order which illustrates gradually more concepts, and gradually more complex ones. Here, we formalize such training strategies in the context of machine learning, and call them curriculum...", "expected_intent": "contrasting", "expected_misrepresents": false, "note": "Strong CONTRASTING cue 'in contrast to'. The citing paper disputes the scope of the cited work's benefit."}
{"id": "l1_pair_008", "split": "l1_intent", "category": "contrasting", "signal": "none", "citing_sentence": "Unlike the model of Peters et al. [8], which requires task-specific fine-tuning to be competitive, our approach achieves strong zero-shot transfer across all evaluated domains.", "citation_key": "8", "cited_doi": "10.18653/v1/N18-1202", "cited_title": "Deep contextualized word representations", "cited_abstract": "We introduce a new type of deep contextualized word representation that models both complex characteristics of word use (e.g., syntax and semantics), and how these uses vary across linguistic contexts. Our word vectors are learned functions of the internal states of a deep bidirectional language...", "expected_intent": "contrasting", "expected_misrepresents": false, "note": "CONTRASTING via 'unlike'. Contrasts zero-shot capability vs. task-specific fine-tuning requirement."}
{"id": "l1_pair_009", "split": "l1_intent", "category": "contrasting", "signal": "none", "citing_sentence": "The approach proposed by Goodfellow et al. [9] fails to account for training instability arising from mode collapse, a well-documented limitation of vanilla GAN formulations.", "citation_key": "9", "cited_doi": "10.48550/arXiv.1406.2661", "cited_title": "Generative Adversarial Nets", "cited_abstract": "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather...", "expected_intent": "contrasting", "expected_misrepresents": false, "note": "CONTRASTING via 'fails to account for' + 'limitation of'. Both are strong _CONTRASTING_CUES patterns."}
{"id": "l1_pair_010", "split": "l1_intent", "category": "contrasting", "signal": "none", "citing_sentence": "While batch normalization as described in Ioffe and Szegedy [10] improves training stability, it suffers from significant performance degradation under small batch sizes, contrary to the authors' original claims.", "citation_key": "10", "cited_doi": "10.5555/3045118.3045167", "cited_title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", "cited_abstract": "Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs.", "expected_intent": "contrasting", "expected_misrepresents": false, "note": "CONTRASTING via 'suffers from'. 'While' alone is a weak marker but is combined with 'suffers from', a strong contrasting cue."}
{"id": "l1_pair_011", "split": "l1_intent", "category": "contrasting", "signal": "none", "citing_sentence": "The findings of Obermeyer et al. [11] challenge the assumption that algorithmic risk scores are race-neutral; their analysis reveals that a widely deployed commercial algorithm exhibited significant racial bias in healthcare resource allocation.", "citation_key": "11", "cited_doi": "10.1126/science.aax2342", "cited_title": "Dissecting racial bias in an algorithm used to manage the health of populations", "cited_abstract": "Health systems rely on commercial prediction algorithms to identify and help patients with complex health needs. We show that a widely used algorithm, typical of this class of algorithms, exhibits significant racial bias: at a given risk score, Black patients are considerably sicker than White...", "expected_intent": "contrasting", "expected_misrepresents": false, "note": "CONTRASTING via 'challenge the assumption'. The cited paper itself presents contrasting evidence against the status quo."}
{"id": "l1_pair_012", "split": "l1_intent", "category": "mentioning", "signal": "none", "citing_sentence": "Several approaches to neural machine translation have been explored in recent years [12], including encoder-decoder architectures with attention and purely convolutional models.", "citation_key": "12", "cited_doi": "10.48550/arXiv.1409.0473", "cited_title": "Neural Machine Translation by Jointly Learning to Align and Translate", "cited_abstract": "Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance.", "expected_intent": "mentioning", "expected_misrepresents": false, "note": "Pure background MENTIONING. No strong cue phrases. Enumerates prior work neutrally."}
{"id": "l1_pair_013", "split": "l1_intent", "category": "mentioning", "signal": "none", "citing_sentence": "Reinforcement learning from human feedback has been studied extensively [13], and remains an active research area with applications in dialogue systems and code generation.", "citation_key": "13", "cited_doi": "10.48550/arXiv.2203.02155", "cited_title": "Training language models to follow instructions with human feedback", "cited_abstract": "Making language models bigger does not inherently make them better at following a user's intent. Large language models can generate outputs that are untruthful, toxic, or simply not helpful to the user.", "expected_intent": "mentioning", "expected_misrepresents": false, "note": "Background mention. 'Has been studied extensively' is a stock neutral academic phrase with no strong cue pattern."}
{"id": "l1_pair_014", "split": "l1_intent", "category": "mentioning", "signal": "none", "citing_sentence": "Bayesian optimization methods [14] represent one family of approaches to hyperparameter search, though they are not the focus of the current paper.", "citation_key": "14", "cited_doi": "10.5555/2999134.2999257", "cited_title": "Practical Bayesian Optimization of Machine Learning Algorithms", "cited_abstract": "Machine learning algorithms frequently require careful tuning of model hyperparameters, regularization terms, and optimization parameters. Unfortunately, this tuning is often a dark art requiring expert experience, rules of thumb, or sometimes brute-force search.", "expected_intent": "mentioning", "expected_misrepresents": false, "note": "Peripheral/scope-delimiting MENTIONING. The phrase 'not the focus of the current paper' is explicitly neutral."}
{"id": "l1_pair_015", "split": "l1_intent", "category": "mentioning", "signal": "none", "citing_sentence": "Graph neural networks [15] have been applied to a range of problems including drug discovery, social network analysis, and combinatorial optimization.", "citation_key": "15", "cited_doi": "10.1109/TNN.2008.2005605", "cited_title": "The Graph Neural Network Model", "cited_abstract": "Many underlying relationships among data in several areas of science and engineering, e.g., computer vision, molecular chemistry, molecular biology, pattern recognition, and data mining, can be represented in terms of graphs.", "expected_intent": "mentioning", "expected_misrepresents": false, "note": "Definitional/scope-setting MENTIONING. Lists applications without endorsing or disputing the cited method."}
{"id": "l1_pair_016", "split": "l1_intent", "category": "misrepresents", "signal": "citation_misrepresents_source", "citing_sentence": "As demonstrated by Knijnenburg et al. [16], users reliably prefer personalized recommendations over non-personalized ones across all demographic groups and interface types tested.", "citation_key": "16", "cited_doi": "10.1145/2043932.2043956", "cited_title": "Explaining the User Experience of Recommender Systems", "cited_abstract": "We study the effect of system accuracy on user experience in recommender systems. We find that accuracy is an important but not the only determinant of user satisfaction. Perceived privacy risk, algorithm transparency, and individual differences significantly moderate the relationship between...", "expected_intent": "supporting", "expected_misrepresents": true, "note": "MISREPRESENTS: The citing sentence says users 'reliably prefer' personalized recs 'across all demographic groups'. The actual paper found that individual differences and privacy concerns moderate this relationship — it is not universal. The cue phrase 'as demonstrated by' triggers SUPPORTING but the claim overgeneralizes a conditional finding."}
{"id": "l1_pair_017", "split": "l1_intent", "category": "misrepresents", "signal": "citation_misrepresents_source", "citing_sentence": "Lake and Baroni [17] showed that neural networks can generalize compositionally in a manner comparable to humans, validating the use of deep learning as a model of systematic human cognition.", "citation_key": "17", "cited_doi": "10.1162/tacl_a_00334", "cited_title": "SCAN: Learning to Compose Commands", "cited_abstract": "We introduce SCAN, a set of simple language navigation tasks that test compositional learning. We show that standard sequence-to-sequence and convolutional models fail to achieve human-like systematic generalization on SCAN, with performance dropping dramatically on compositionally novel test...", "expected_intent": "supporting", "expected_misrepresents": true, "note": "MISREPRESENTS: The citing sentence inverts the conclusion. The paper actually found that standard NNs fail to generalize compositionally like humans. The citing sentence cites it as validation of deep learning's human-like compositionality — the exact opposite of the paper's finding."}
{"id": "l1_pair_018", "split": "l1_intent", "category": "misrepresents", "signal": "citation_misrepresents_source", "citing_sentence": "The large-scale study of Ioannidis [18] confirmed that most published findings in biomedicine are reliable and reproducible when adequate sample sizes are used.", "citation_key": "18", "cited_doi": "10.1371/journal.pmed.0020124", "cited_title": "Why Most Published Research Findings Are False", "cited_abstract": "There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power, bias, the number of other studies on the same question, and, importantly, the ratio of true to not-true relationships among those probed in...", "expected_intent": "supporting", "expected_misrepresents": true, "note": "MISREPRESENTS: A classic direct inversion. Ioannidis's paper argues most findings are FALSE; the citing sentence claims the paper confirmed findings are reliable. The cue 'confirmed' triggers SUPPORTING intent but the claim directly contradicts the paper's thesis."}
{"id": "l1_pair_019", "split": "l1_intent", "category": "all_mentioning", "signal": "none", "citing_sentence": "The field of information retrieval has a rich history of work on relevance ranking [19], document clustering [20], and query expansion [21].", "citation_key": "19", "cited_doi": "10.1145/312624.312679", "cited_title": "Okapi BM25: A Non-Binary Model of Document Indexing", "cited_abstract": "BM25 is a ranking function used by search engines to rank matching documents according to their relevance to a given search query. It is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document.", "expected_intent": "mentioning", "expected_misrepresents": false, "note": "ALL_MENTIONING document case (first of two). All three citations [19], [20], [21] in this sentence are pure background enumerations. Tests the all_citations_mentioning signal, which fires when every citation in a document is mentioning-only; here [19], [20], [21] are the only citations and all are background, so the signal should fire."}
{"id": "l1_pair_020", "split": "l1_intent", "category": "all_mentioning", "signal": "none", "citing_sentence": "Early neural approaches to text classification included convolutional networks [22] and recurrent architectures [23], each building on earlier statistical methods.", "citation_key": "22", "cited_doi": "10.18653/v1/D14-1181", "cited_title": "Convolutional Neural Networks for Sentence Classification", "cited_abstract": "We report on a series of experiments with convolutional neural networks trained on top of pre-trained word vectors for sentence-level classification tasks. With little hyperparameter tuning, the system achieves good results on multiple benchmarks, suggesting that the pre-trained vectors are good,...", "expected_intent": "mentioning", "expected_misrepresents": false, "note": "ALL_MENTIONING document case (second of two). Combined with l1_pair_019, if these are the only citations in a document, all_citations_mentioning fires. Tests that the signal correctly identifies a paper that never cites evidence, only lists prior work."}