| question (string) | options (list) | rationale (string) | label (string) | label_idx (int64) | dataset (string) | chunk1 (string) | chunk2 (string) | chunk3 (string) |
|---|---|---|---|---|---|---|---|---|
Trees grow one growth ring each year, so which of these is correct?
|
[
"a tree with nine rings is nine years old",
"a tree with six rings is seven years old",
"a tree with eight rings is five years old",
"telling how old a tree is is impossible based on rings"
] |
Key fact:
a tree grows one growth ring per year
|
A
| 0
|
openbookqa
|
in computer science, a tree is a widely used abstract data type that represents a hierarchical tree structure with a set of connected nodes. each node in the tree can be connected to many children ( depending on the type of tree ), but must be connected to exactly one parent, except for the root node, which has no parent ( i. e., the root node as the top - most node in the tree hierarchy ). these constraints mean there are no cycles or " loops " ( no node can be its own ancestor ), and also that each child can be treated like the root node of its own subtree, making recursion a useful technique for tree traversal. in contrast to linear data structures, many trees cannot be represented by relationships between neighboring nodes ( parent and children nodes of a node under consideration, if they exist ) in a single straight line ( called edge or link between two adjacent nodes ). binary trees are a commonly used type, which constrain the number of children for each parent to at most two. when the order of the children is specified, this data structure corresponds to an ordered tree in graph theory. a value or pointer to other data may be associated with every node in the tree, or sometimes only with the leaf nodes, which have no children nodes. the abstract data type ( adt ) can be represented in a number of ways, including a list of parents with pointers to children, a list of children with pointers to parents, or
|
a hierarchical database model is a data model in which the data is organized into a tree - like structure. the data are stored as records which is a collection of one or more fields. each field contains a single value, and the collection of fields in a record defines its type. one type of field is the link, which connects a given record to associated records. using links, records link to other records, and to other records, forming a tree. an example is a " customer " record that has links to that customer's " orders ", which in turn link to " line _ items ". the hierarchical database model mandates that each child record has only one parent, whereas each parent record can have zero or more child records. the network model extends the hierarchical by allowing multiple parents and children. in order to retrieve data from these databases, the whole tree needs to be traversed starting from the root node. both models were well suited to data that was normally stored on tape drives, which had to move the tape from end to end in order to retrieve data. when the relational database model emerged, one criticism of hierarchical database models was their close dependence on application - specific implementation. this limitation, along with the relational model's ease of use, contributed to the popularity of relational databases, despite their initially lower performance in comparison with the existing network and hierarchical models. history the hierarchical structure was developed by ibm in the 1960s and used in early mainframe dbms. records'relationships form a tree
|
in computer science, a finger tree is a purely functional data structure that can be used to efficiently implement other functional data structures. a finger tree gives amortized constant time access to the " fingers " ( leaves ) of the tree, which is where data is stored, and concatenation and splitting logarithmic time in the size of the smaller piece. it also stores in each internal node the result of applying some associative operation to its descendants. this " summary " data stored in the internal nodes can be used to provide the functionality of data structures other than trees. overview ralf hinze and ross paterson state a finger tree is a functional representation of persistent sequences that can access the ends in amortized constant time. concatenation and splitting can be done in logarithmic time in the size of the smaller piece. the structure can also be made into a general purpose data structure by defining the split operation in a general form, allowing it to act as a sequence, priority queue, search tree, or priority search queue, among other varieties of abstract data types. a finger is a point where one can access part of a data structure ; in imperative languages, this is called a pointer. in a finger tree, the fingers are structures that point to the ends of a sequence, or the leaf nodes. the fingers are added on to the original tree to allow for constant time access to fingers. in the images shown below, the fingers are the lines reaching out of the
|
When looking at an eclipse, an important thing to remember is
|
[
"to take a photograph",
"to look through a window",
"to use hands to shield eyes",
"avert eyes at all costs"
] |
Key fact:
looking directly at an eclipse of the Sun causes harm to the eyes
|
D
| 3
|
openbookqa
|
in a database, a view is the result set of a stored query that presents a limited perspective of the database to a user. this pre - established query command is kept in the data dictionary. unlike ordinary base tables in a relational database, a view does not form part of the physical schema : as a result set, it is a virtual table computed or collated dynamically from data in the database when access to that view is requested. changes applied to the data in a relevant underlying table are reflected in the data shown in subsequent invocations of the view. views can provide advantages over tables : views can represent a subset of the data contained in a table. consequently, a view can limit the degree of exposure of the underlying tables to the outer world : a given user may have permission to query the view, while denied access to the rest of the base table. views can join and simplify multiple tables into a single virtual table. views can act as aggregated tables, where the database engine aggregates data ( sum, average, etc. ) and presents the calculated results as part of the data. views can hide the complexity of data. for example, a view could appear as sales2020 or sales2021, transparently partitioning the actual underlying table. views take very little space to store ; the database contains only the definition of a view, not a copy of all the data that it presents. views structure data in a way that classes of users find natural and intuitive.
|
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
|
microsoft access is a database management system ( dbms ) from microsoft that combines the relational access database engine ( ace ) with a graphical user interface and software - development tools. it is a member of the microsoft 365 suite of applications, included in the professional and higher editions or sold separately. microsoft access stores data in its own format based on the access database engine ( formerly jet database engine ). it can also import or link directly to data stored in other applications and databases. software developers, data architects and power users can use microsoft access to develop application software. like other microsoft office applications, access is supported by visual basic for applications ( vba ), an object - based programming language that can reference a variety of objects including the legacy dao ( data access objects ), activex data objects, and many other activex components. visual objects used in forms and reports expose their methods and properties in the vba programming environment, and vba code modules may declare and call windows operating system operations. history in the 1980s, microsoft access referred to an unrelated telecommunication program that provided terminal emulation and interfaces for ease of use in accessing online services such as dow jones, compuserve and electronic mailbox. with the popularization of personal computing at home and in the workplace, in the 1990s desktop databases became commonplace. prior to the introduction of access, borland ( with paradox ), ashton - tate ( with dbase, acquired by borland in 1991 ) and fox ( with foxpro )
|
Unless the animal is native to a cold climate like Antarctica, it is going to need to stay warm enough in winter so it
|
[
"Stays alive",
"stays dead",
"in cream",
"in juice"
] |
Key fact:
an animal requires warmth for survival
|
A
| 0
|
openbookqa
|
in database systems, durability is the acid property that guarantees that the effects of transactions that have been committed will survive permanently, even in cases of failures, including incidents and catastrophic events. for example, if a flight booking reports that a seat has successfully been booked, then the seat will remain booked even if the system crashes. formally, a database system ensures the durability property if it tolerates three types of failures : transaction, system, and media failures. in particular, a transaction fails if its execution is interrupted before all its operations have been processed by the system. these kinds of interruptions can be originated at the transaction level by data - entry errors, operator cancellation, timeout, or application - specific errors, like withdrawing money from a bank account with insufficient funds. at the system level, a failure occurs if the contents of the volatile storage are lost, due, for instance, to system crashes, like out - of - memory events. at the media level, where media means a stable storage that withstands system failures, failures happen when the stable storage, or part of it, is lost. these cases are typically represented by disk failures. thus, to be durable, the database system should implement strategies and operations that guarantee that the effects of transactions that have been committed before the failure will survive the event ( even by reconstruction ), while the changes of incomplete transactions, which have not been committed yet at the time of failure, will be reverted and will not affect the state of
|
the chemical database service is an epsrc - funded mid - range facility that provides uk academic institutions with access to a number of chemical databases. it has been hosted by the royal society of chemistry since 2013, before which it was hosted by daresbury laboratory ( part of the science and technology facilities council ). currently, the included databases are : acd / i - lab, a tool for prediction of physicochemical properties and nmr spectra from a chemical structure available chemicals directory, a structure - searchable database of commercially available chemicals cambridge structural database ( csd ), a crystallographic database of organic and organometallic structures inorganic crystal structure database ( icsd ), a crystallographic database of inorganic structures crystalworks, a database combining data from csd, icsd and crystmet detherm, a database of thermophysical data for chemical compounds and mixtures spresiweb, a database of organic compounds and reactions = = references = =
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
When a plant is watered, the liquid goes from the soil to where it is needed using what system?
|
[
"xylem",
"flowering pistols",
"sprinkler",
"leaves"
] |
Key fact:
xylem carries water from the roots of a plant to the leaves of a plant
|
A
| 0
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
genedb was a genome database for eukaryotic and prokaryotic pathogens. references external links http://www.genedb.org
|
the plant proteome database is a national science foundation - funded project to determine the biological function of each protein in plants. it includes data for two plants that are widely studied in molecular biology, arabidopsis thaliana and maize ( zea mays ). initially the project was limited to plant plastids, under the name of the plastid pdb, but was expanded and renamed plant pdb in november 2007. see also proteome references external links plant proteome database home page
|
Seasons of the year highly impact what?
|
[
"Moods",
"Environment",
"Behavior",
"Consumption"
] |
Key fact:
seasons cause change to the environment
|
B
| 1
|
openbookqa
|
implicit data collection is used in humancomputer interaction to gather data about the user in an implicit, non - invasive way. overview the collection of user - related data in humancomputer interaction is used to adapt the computer interface to the end user. the data collected are used to build a user model. the user model is then used to help the application to filter the information for the end user. such systems are useful in recommender applications, military applications ( implicit stress detection ) and others. channels for collecting data the system can record the user's explicit interaction and thus build an mpeg7 usage history log. furthermore, the system can use other channels to gather information about the user's emotional state. the following implicit channels have been used so far to get the affective state of the end user : facial activity posture activity hand tension and activity gestural activity vocal expression language and choice of words electrodermal activity eye tracking emotional spaces the detected emotional value is usually described any of the two most popular notations : a 3d emotional vector : valence, arousal, dominance degree of affiliation to the 6 basic emotions ( sadness, happiness, anger, fear, disgust, surprise ) external links evaluating affective interactions : alternatives to asking what users feel rosalind picard, shaundra bryant daily
|
behavior informatics ( bi ) is the informatics of behaviors so as to obtain behavior intelligence and behavior insights. bi is a research method combining science and technology, specifically in the area of engineering. the purpose of bi includes analysis of current behaviors as well as the inference of future possible behaviors. this occurs through pattern recognition. different from applied behavior analysis from the psychological perspective, bi builds computational theories, systems and tools to qualitatively and quantitatively model, represent, analyze, and manage behaviors of individuals, groups and / or organizations. bi is built on classic study of behavioral science, including behavior modeling, applied behavior analysis, behavior analysis, behavioral economics, and organizational behavior. typical bi tasks consist of individual and group behavior formation, representation, computational modeling, analysis, learning, simulation, and understanding of behavior impact, utility, non - occurring behaviors etc. for behavior intervention and management. the behavior informatics approach to data utilizes cognitive as well as behavioral data. by combining the data, bi has the potential to effectively illustrate the big picture when it comes to behavioral decisions and patterns. one of the goals of bi is also to be able to study human behavior while eliminating issues like self - report bias. this creates more reliable and valid information for research studies. behavior analytics behavior informatics covers behavior analytics which focuses on analysis and learning of behavioral data. behavior from an informatics perspective, a behavior consists of three key elements : actors ( behavioral subjects and objects ), operations ( actions, activities ) and
|
the bitterdb is a database of compounds that were reported to taste bitter to humans. the aim of the bitterdb database is to gather information about bitter - tasting natural and synthetic compounds, and their cognate bitter taste receptors ( t2rs or tas2rs ). summary the bitterdb includes over 670 compounds that were reported to taste bitter to humans. the compounds can be searched by name, chemical structure, similarity to other bitter compounds, association with a particular human bitter taste receptor, and by other properties as well. the database also contains information on mutations in bitter taste receptors that were shown to influence receptor activation by bitter compounds. database overview bitter compounds bitterdb currently contains more than 670 compounds that were cited in the literature as bitter. for each compound, the database offers information regarding its molecular properties, references for the compounds bitterness, including additional information about the bitterness category of the compound ( e. g. a bitter - sweet or slightly bitter annotation ), different compound identifiers ( smiles, cas registry number, iupac systematic name ), an indication whether the compound is derived from a natural source or is synthetic, a link to the compounds pubchem entry and different file formats for downloading ( sdf, image, smiles ). over 200 bitter compounds have been experimentally linked to their corresponding human bitter taste receptors. for those compounds, bitterdb provides additional information, including links to the publications indicating these ligandreceptor interactions, the effective concentration for receptor
|
An acquired characteristic is
|
[
"a jagged raised welt you've had since you fell down the stairs 6 years ago",
"freckles from your mom's genes",
"brown, curly hair that resembles your sister's",
"a large nose just like your dad's"
] |
Key fact:
a scar is an acquired characteristic
|
A
| 0
|
openbookqa
|
edas was a database of alternatively spliced human genes. it doesn't seem to exist anymore. see also aspicdb database references external links http://www.gene-bee.msu.ru/edas/
|
in bioinformatics, a gene disease database is a systematized collection of data, typically structured to model aspects of reality, in a way to comprehend the underlying mechanisms of complex diseases, by understanding multiple composite interactions between phenotype - genotype relationships and gene - disease mechanisms. gene disease databases integrate human gene - disease associations from various expert curated databases and text mining derived associations including mendelian, complex and environmental diseases. introduction experts in different areas of biology and bioinformatics have been trying to comprehend the molecular mechanisms of diseases to design preventive and therapeutic strategies for a long time. for some illnesses, it has become apparent that it is the right amount of animosity is made for not enough to obtain an index of the disease - related genes but to uncover how disruptions of molecular grids in the cell give rise to disease phenotypes. moreover, even with the unprecedented wealth of information available, obtaining such catalogues is extremely difficult. genetic broadly speaking, genetic diseases are caused by aberrations in genes or chromosomes. many genetic diseases are developed from before birth. genetic disorders account for a significant number of the health care problems in our society. advances in the understanding of this diseases have increased both the life span and quality of life for many of those affected by genetic disorders. recent developments in bioinformatics and laboratory genetics have made possible the better delineation of certain malformation and mental retardation syndromes, so that their mode of inheritance
|
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb includes also a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without missing the focus on the searched protein or gene. in phylomedb v4 all the information available for each tree
|
a hawk will use their claws to touch which of the following?
|
[
"mouse entity",
"nuts and berries",
"lion",
"rhino"
] |
Key fact:
a mouse gives birth to live young
|
A
| 0
|
openbookqa
|
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb includes also a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without missing the focus on the searched protein or gene. in phylomedb v4 all the information available for each tree
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
|
A person has sand in their shoe from the beach, and they dump the sand out at their doorstep many miles away. The sand could end up back on that beach if
|
[
"magic creatures move it",
"someone goes to the zoo",
"a large gust passes",
"people wish for it"
] |
Key fact:
wind carries sand from one place to another place
|
C
| 2
|
openbookqa
|
a zoo ( short for zoological garden ; also called an animal park or menagerie ) is a facility in which animals are kept within enclosures for public exhibition and often bred for conservation purposes. the term zoological garden refers to zoology, the study of animals. the term is derived from the ancient greek, zion,'animal ', and the suffix -, - logia,'study of '. the abbreviation zoo was first used of the london zoological gardens, which was opened for scientific study in 1828, and to the public in 1847. the first modern zoo was the tierpark hagenbeck by carl hagenbeck in germany. in the united states alone, zoos are visited by over 181 million people annually. etymology the london zoo, which was opened in 1828, was initially known as the " gardens and menagerie of the zoological society of london ", and it described itself as a menagerie or " zoological forest ". the abbreviation " zoo " first appeared in print in the united kingdom around 1847, when it was used for the clifton zoo, but it was not until some 20 years later that the shortened form became popular in the rhyming song " walking in the zoo " by music - hall artist alfred vance. the term " zoological park " was used for more expansive facilities in halifax, nova scotia, washington, d. c., and the bronx in new york, which opened in 1846, 1891 and 1899 respectively. relatively new terms for zoos, in the late
|
in ethology, animal locomotion is any of a variety of methods that animals use to move from one place to another. some modes of locomotion are ( initially ) self - propelled, e. g., running, swimming, jumping, flying, hopping, soaring and gliding. there are also many animal species that depend on their environment for transportation, a type of mobility called passive locomotion, e. g., sailing ( some jellyfish ), kiting ( spiders ), rolling ( some beetles and spiders ) or riding other animals ( phoresis ). animals move for a variety of reasons, such as to find food, a mate, a suitable microhabitat, or to escape predators. for many animals, the ability to move is essential for survival and, as a result, natural selection has shaped the locomotion methods and mechanisms used by moving organisms. for example, migratory animals that travel vast distances ( such as the arctic tern ) typically have a locomotion mechanism that costs very little energy per unit distance, whereas non - migratory animals that must frequently move quickly to escape predators are likely to have energetically costly, but very fast, locomotion. the anatomical structures that animals use for movement, including cilia, legs, wings, arms, fins, or tails are sometimes referred to as locomotory organs or locomotory structures. etymology the term " locomotion " is formed in english from latin loco " from a
|
zootomy is the branch of zoology that deals with the anatomical structure of animals. it involves the study and comparison of the physical structures of different animal species to understand their morphology. in this context, physiology would be concerned with the functions of those structures, neuroanatomy with the nervous system structures, and embryology with the development of structures from the embryo stage. therefore, the correct choice is morphology, which directly relates to the form and structure of anatomical features in animals.
|
If an electric circuit has six paths and another electric circuit has one path, the six path circuit is
|
[
"really broken",
"singular",
"equidistant",
"slowly burning"
] |
Key fact:
if electricity flows along more than one pathway then the circuit is parallel
|
C
| 2
|
openbookqa
|
exploitdb, sometimes stylized as exploit database or exploit - database, is a public and open source vulnerability database maintained by offensive security. it is one of the largest and most popular exploit databases in existence. while the database is publicly available via their website, the database can also be used by utilizing the searchsploit command - line tool which is native to kali linux. the database also contains proof - of - concepts ( pocs ), helping information security professionals learn new exploit variations. in ethical hacking and penetration testing guide, rafay baloch said exploit - db had over 20, 000 exploits, and was available in backtrack linux by default. in ceh v10 certified ethical hacker study guide, ric messier called exploit - db a " great resource ", and stated it was available within kali linux by default, or could be added to other linux distributions. the current maintainers of the database, offensive security, are not responsible for creating the database. the database was started in 2004 by a hacker group known as milw0rm and has changed hands several times. as of 2023, the database contained 45, 000 entries from more than 9, 000 unique authors. see also offensive security offensive security certified professional references external links official website
|
debug ; gdb ; ddt, a pdp - 10 debugger from dec used as a command shell for the mit incompatible timesharing system ; firebug / chromebug, a javascript shell and debugging environment as a firefox plugin
|
the problem of database repair is a question about relational databases which has been studied in database theory, and which is a particular kind of data cleansing. the problem asks about how we can " repair " an input relational database in order to make it satisfy integrity constraints. the goal of the problem is to be able to work with data that is " dirty ", i. e., does not satisfy the right integrity constraints, by reasoning about all possible repairs of the data, i. e., all possible ways to change the data to make it satisfy the integrity constraints, without committing to a specific choice. several variations of the problem exist, depending on : what we intend to figure out about the dirty data : figuring out if some database tuple is certain ( i. e., is in every repaired database ), figuring out if some query answer is certain ( i. e., the answer is returned when evaluating the query on every repaired database ) which kinds of ways are allowed to repair the database : can we insert new facts, remove facts ( so - called subset repairs ), and so on which repaired databases do we study : those where we only change a minimal subset of the database tuples ( e. g., minimal subset repairs ), those where we only change a minimal number of database tuples ( e. g., minimal cardinality repairs ) the problem of database repair has been studied to understand what is the complexity of these different problem variants, i. e.,
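As a toy illustration of the subset-repair and certain-tuple notions described above, the following Python sketch (with invented example data) repairs a table that violates a key constraint by removing a minimal number of tuples, then checks which tuples are certain, i.e. appear in every repair:

```python
from itertools import combinations

# A "dirty" table violating a key constraint: two tuples share the key id=1.
dirty = [(1, "Ada"), (1, "Alan"), (2, "Grace")]

def satisfies_key(db):
    """Integrity constraint: the first column is a key (no duplicates)."""
    ids = [t[0] for t in db]
    return len(ids) == len(set(ids))

# Enumerate all subsets that satisfy the constraint, largest first...
candidates = [set(db) for r in range(len(dirty), -1, -1)
              for db in combinations(dirty, r) if satisfies_key(db)]
# ...and keep only the maximal ones: the minimal-cardinality subset repairs.
max_size = max(len(c) for c in candidates)
repairs = [c for c in candidates if len(c) == max_size]

# A tuple is certain if it belongs to every repair.
certain = set.intersection(*repairs)
print(certain)  # {(2, 'Grace')}
```

Here the two repairs keep either Ada or Alan, so only the Grace tuple is certain; this brute-force enumeration is exponential and serves only to make the definitions concrete.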
|
How long does it take Earth to fully rotate one time
|
[
"28 hours",
"46 hours",
"1400 minutes",
"1440 minutes"
] |
Key fact:
a Rotation of the Earth on itself takes one day
|
D
| 3
|
openbookqa
|
a temporal database stores data relating to time instances. it offers temporal data types and stores information relating to past, present and future time. temporal databases can be uni - temporal, bi - temporal or tri - temporal. more specifically the temporal aspects usually include valid time, transaction time and / or decision time. valid time is the time period during or event time at which a fact is true in the real world. transaction time is the time at which a fact was recorded in the database. decision time is the time at which the decision was made about the fact. used to keep a history of decisions about valid times. types uni - temporal a uni - temporal database has one axis of time, either the validity range or the system time range. bi - temporal a bi - temporal database has two axes of time : valid time transaction time or decision time tri - temporal a tri - temporal database has three axes of time : valid time transaction time decision time this approach introduces additional complexities. temporal databases are in contrast to current databases ( not to be confused with currently available databases ), which store only facts which are believed to be true at the current time. features temporal databases support managing and accessing temporal data by providing one or more of the following features : a time period datatype, including the ability to represent time periods with no end ( infinity or forever ) the ability to define valid and transaction time period attributes and bitemporal relations system - maintained transaction time temporal primary keys, including
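The valid-time versus transaction-time distinction in the chunk above can be made concrete with a small bi-temporal sketch in Python; the records and field names are invented for illustration:

```python
from datetime import date

# Bi-temporal sketch: each fact carries a valid-time interval (when it is true
# in the real world) and a transaction-time interval (when the database
# believed it). Intervals are half-open; date.max stands in for "forever".
facts = [
    {"fact": "address=Oak St",
     "valid_from": date(2020, 1, 1), "valid_to": date(2022, 1, 1),
     "tx_from": date(2020, 1, 5), "tx_to": date.max},
    {"fact": "address=Elm St",
     "valid_from": date(2022, 1, 1), "valid_to": date.max,
     "tx_from": date(2022, 1, 2), "tx_to": date.max},
]

def as_of(valid, tx):
    """What did the database believe at transaction time tx
    about what was true in the real world at time valid?"""
    return [f["fact"] for f in facts
            if f["valid_from"] <= valid < f["valid_to"]
            and f["tx_from"] <= tx < f["tx_to"]]

print(as_of(date(2021, 6, 1), date(2023, 1, 1)))  # ['address=Oak St']
```

Querying along the two time axes independently is exactly what distinguishes a bi-temporal database from a current database, which only answers "what is true now".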
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
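A minimal sketch of the property-graph model described above, using plain Python containers (the node and edge data are invented for illustration):

```python
# Property graph sketch: nodes and directed, labelled edges, each of which can
# carry its own properties. (Illustrative data, not a real graph database.)
nodes = {
    "alice": {"kind": "person"},
    "acme": {"kind": "company"},
}
edges = [
    ("alice", "WORKS_AT", "acme", {"since": 2020}),
]

def neighbours(node, label):
    """Follow stored relationships directly, with no join: edges are
    first-class citizens, so traversal is a single scan per hop."""
    return [dst for src, lbl, dst, _props in edges
            if src == node and lbl == label]

print(neighbours("alice", "WORKS_AT"))  # ['acme']
```

The design point the chunk makes is visible here: the relationship itself is stored as data (label, direction, properties), so linked records are retrieved by following edges rather than by computing joins at query time.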
|
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by
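The tabular presentation and relational operators described above can be demonstrated with Python's built-in `sqlite3` module; the table name and rows are illustrative:

```python
import sqlite3

# An in-memory relational database: data is presented as a table of rows
# and columns, and manipulated with relational operators via SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customer (id, name) VALUES (1, 'Ada'), (2, 'Grace')")

# Selection and projection: pick rows by a predicate, return chosen columns.
rows = conn.execute("SELECT name FROM customer WHERE id = 2").fetchall()
print(rows)  # [('Grace',)]
conn.close()
```

This matches the minimal definition the chunk gives: data presented as relations (tables of rows and columns) plus operators to manipulate them in tabular form.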
|
Pollination is required for what reproduction?
|
[
"elephant",
"bug",
"bird",
"flora"
] |
Key fact:
plant reproduction requires pollination
|
D
| 3
|
openbookqa
|
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
|
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb includes also a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without missing the focus on the searched protein or gene. in phylomedb v4 all the information available for each tree
|
florabase is a public access web - based database of the flora of western australia. it provides authoritative scientific information on 12, 978 taxa, including descriptions, maps, images, conservation status and nomenclatural details. 1, 272 alien taxa ( naturalised weeds ) are also recorded. the system takes data from datasets including the census of western australian plants and the western australian herbarium specimen database of more than 803, 000 vouchered plant collections. it is operated by the western australian herbarium within the department of parks and wildlife. it was established in november 1998. in its distribution guide it uses a combination of ibra version 5. 1 and john stanley beard's botanical provinces. see also declared rare and priority flora list for other online flora databases see list of electronic floras. references external links official website
|
In the morning, Rebecca saw some fluid in the gutter. Later, it was gone. What happened to it?
|
[
"condensation",
"evaporation",
"magic",
"deposition"
] |
Key fact:
evaporation is when water is drawn back up into the air in the water cycle
|
B
| 1
|
openbookqa
|
the chemical database service is an epsrc - funded mid - range facility that provides uk academic institutions with access to a number of chemical databases. it has been hosted by the royal society of chemistry since 2013, before which it was hosted by daresbury laboratory ( part of the science and technology facilities council ). currently, the included databases are : acd / i - lab, a tool for prediction of physicochemical properties and nmr spectra from a chemical structure available chemicals directory, a structure - searchable database of commercially available chemicals cambridge structural database ( csd ), a crystallographic database of organic and organometallic structures inorganic crystal structure database ( icsd ), a crystallographic database of inorganic structures crystalworks, a database combining data from csd, icsd and crystmet detherm, a database of thermophysical data for chemical compounds and mixtures spresiweb, a database of organic compounds and reactions = = references = =
|
a crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. they are characterized by symmetry, morphology, and directionally dependent physical properties. a crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. ( molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in x - ray, neutron, and electron diffraction based crystallography ). crystal structures of crystalline material are typically determined from x - ray or neutron single - crystal diffraction data and stored in crystal structure databases. they are routinely identified by comparing reflection intensities and lattice spacings from x - ray powder diffraction data with entries in powder - diffraction fingerprinting databases. crystal structures of nanometer sized crystalline samples can be determined via structure factor amplitude information from single - crystal electron diffraction data or structure factor amplitude and phase angle information from fourier transforms of hrtem images of crystallites. they are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice - fringe fingerprint plots with entries in a lattice - fringe fingerprinting database. crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. many provide structure visualization capabilities. they can be browser based or
|
electroless deposition ( ed ) or electroless plating is a chemical process by which metals and metal alloys are deposited onto a surface. electroless deposition uses a chemical reaction that causes a metal to precipitate and coat nearby surfaces. it is dubbed " electroless " because prior processes use an electric current which is referred to as electroplating. electroless deposition thus can occur on non - conducting surfaces, making it possible to coat diverse materials including plastics, ceramics, and glass, etc. ed produced films can be decorative, anti - corrosive, and conductive. common applications of ed include films and mirrors containing nickel and / or silver. electroless deposition changes the mechanical, magnetic, internal stress, conductivity, and brightening of the substrate. the first industrial application of electroless deposition by the leonhardt plating company has flourished into metallization of plastics, textiles, prevention of corrosion, and jewelry. the microelectronics industry uses ed in the manufacturing of circuit boards, semi - conductive devices, batteries, and sensors. comparison with other methods electroplating is generally cheaper than ed. unlike ed, electroplating only deposits on other conductive or semi - conductive materials. requiring an applied current, the instrumentation for electroplating is more complex. electroless deposition deposits metals onto 2d and 3d structures, whereas other plating methods such as physical vapor deposition ( pvd ), chemical vapor deposition ( cvd ) are limited to 2d surfaces. electroless
|
A source of heat might be
|
[
"rubbing noses",
"swimming in Antarctica",
"touching ice",
"sitting in freezers"
] |
Key fact:
a car engine is a source of heat
|
A
| 0
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
arangodb is a graph database system developed by arangodb inc. arangodb is a multi - model database system since it supports three data models ( graphs, json documents, key / value ) with one database core and a unified query language aql ( arangodb query language ). aql is mainly a declarative language and allows the combination of different data access patterns in a single query. arangodb is a nosql database system but aql is similar in many ways to sql. it uses rocksdb as a storage engine. history arangodb gmbh was founded in 2014 by claudius weinberger and frank celler. they originally called the database system " a versatile object container ", or avoc for short, leading them to call the database avocadodb. later, they changed the name to arangodb. the word " arango " refers to a little - known avocado variety grown in cuba. in january 2017 arangodb raised a seed round investment of 4. 2 million euros led by target partners. in march 2019 arangodb raised 10 million dollars in series a funding led by bow capital. in october 2021 arangodb raised 27. 8 million dollars in series b funding led by iris capital. release history features json : arangodb uses json as a default storage format, but internally it uses arangodb velocypack a fast and compact binary format for serialization and storage. arango
|
matrixdb is a biological database focused on molecular interactions between extracellular proteins and polysaccharides. matrixdb takes into account the multimeric nature of the extracellular proteins ( for example, collagens, laminins and thrombospondins are multimers ). the database was initially released in 2009 and is maintained by the research group of sylvie ricard - blum at umr5246, claude bernard university lyon 1. matrixdb is linked with unigene and the human protein atlas. it also allows users to build customised tissue - and disease - specific interaction networks, which can be further analysed and visualised using cytoscape or medusa. matrixdb is an active member of the international molecular exchange consortium ( imex ), a group of the major public providers of interaction data. other participating databases include the biomolecular interaction network database ( bind ), intact, the molecular interaction database ( mint ), mips, mpact, and biogrid. the databases of imex work together to prevent duplications of effort, collecting data from non - overlapping sources and sharing the curated interaction data. the imex consortium also worked to develop the hupo - psi - mi xml standard format for annotating and exchanging interaction data. matrixdb includes interaction data extracted from the literature by manual curation and offers access to relevant data involving extracellular proteins provided by imex partner databases through the psicquic webservice
|
What dissolves in water when combined?
|
[
"crystal carbohydrates",
"iron",
"oil",
"plastic"
] |
Key fact:
sugar dissolves in water when they are combined
|
A
| 0
|
openbookqa
|
a crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. they are characterized by symmetry, morphology, and directionally dependent physical properties. a crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. ( molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in x - ray, neutron, and electron diffraction based crystallography ). crystal structures of crystalline material are typically determined from x - ray or neutron single - crystal diffraction data and stored in crystal structure databases. they are routinely identified by comparing reflection intensities and lattice spacings from x - ray powder diffraction data with entries in powder - diffraction fingerprinting databases. crystal structures of nanometer sized crystalline samples can be determined via structure factor amplitude information from single - crystal electron diffraction data or structure factor amplitude and phase angle information from fourier transforms of hrtem images of crystallites. they are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice - fringe fingerprint plots with entries in a lattice - fringe fingerprinting database. crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. many provide structure visualization capabilities. they can be browser based or
|
the chemical database service is an epsrc - funded mid - range facility that provides uk academic institutions with access to a number of chemical databases. it has been hosted by the royal society of chemistry since 2013, before which it was hosted by daresbury laboratory ( part of the science and technology facilities council ). currently, the included databases are : acd / i - lab, a tool for prediction of physicochemical properties and nmr spectra from a chemical structure available chemicals directory, a structure - searchable database of commercially available chemicals cambridge structural database ( csd ), a crystallographic database of organic and organometallic structures inorganic crystal structure database ( icsd ), a crystallographic database of inorganic structures crystalworks, a database combining data from csd, icsd and crystmet detherm, a database of thermophysical data for chemical compounds and mixtures spresiweb, a database of organic compounds and reactions = = references = =
|
a chemical database is a database specifically designed to store chemical information. this information is about chemical and crystal structures, spectra, reactions and syntheses, and thermophysical data. types of chemical databases bioactivity database bioactivity databases correlate structures or other chemical information to bioactivity results taken from bioassays in literature, patents, and screening programs. chemical structures chemical structures are traditionally represented using lines indicating chemical bonds between atoms and drawn on paper ( 2d structural formulae ). while these are ideal visual representations for the chemist, they are unsuitable for computational use and especially for search and storage. small molecules ( also called ligands in drug design applications ), are usually represented using lists of atoms and their connections. large molecules such as proteins are however more compactly represented using the sequences of their amino acid building blocks. radioactive isotopes are also represented, which is an important attribute for some applications. large chemical databases for structures are expected to handle the storage and searching of information on millions of molecules taking terabytes of physical memory. literature database chemical literature databases correlate structures or other chemical information to relevant references such as academic papers or patents. this type of database includes stn, scifinder, and reaxys. links to literature are also included in many databases that focus on chemical characterization. crystallographic database crystallographic databases store x - ray crystal structure data. common examples include protein data bank and cambridge structural database. nmr spectra database nmr
|
which of these can be considered a stage in the water cycle?
|
[
"all of these",
"the combination of nails and hammers",
"the presence of H2O",
"the combination of chlorine and gas"
] |
Key fact:
condensation is a stage in the water cycle process
|
C
| 2
|
openbookqa
|
a chemical database is a database specifically designed to store chemical information. this information is about chemical and crystal structures, spectra, reactions and syntheses, and thermophysical data. types of chemical databases bioactivity database bioactivity databases correlate structures or other chemical information to bioactivity results taken from bioassays in literature, patents, and screening programs. chemical structures chemical structures are traditionally represented using lines indicating chemical bonds between atoms and drawn on paper ( 2d structural formulae ). while these are ideal visual representations for the chemist, they are unsuitable for computational use and especially for search and storage. small molecules ( also called ligands in drug design applications ), are usually represented using lists of atoms and their connections. large molecules such as proteins are however more compactly represented using the sequences of their amino acid building blocks. radioactive isotopes are also represented, which is an important attribute for some applications. large chemical databases for structures are expected to handle the storage and searching of information on millions of molecules taking terabytes of physical memory. literature database chemical literature databases correlate structures or other chemical information to relevant references such as academic papers or patents. this type of database includes stn, scifinder, and reaxys. links to literature are also included in many databases that focus on chemical characterization. crystallographic database crystallographic databases store x - ray crystal structure data. common examples include protein data bank and cambridge structural database. nmr spectra database nmr
|
the simplest organic compounds are hydrocarbons and are composed of carbon and hydrogen.
|
the chemical database service is an epsrc - funded mid - range facility that provides uk academic institutions with access to a number of chemical databases. it has been hosted by the royal society of chemistry since 2013, before which it was hosted by daresbury laboratory ( part of the science and technology facilities council ). currently, the included databases are : acd / i - lab, a tool for prediction of physicochemical properties and nmr spectra from a chemical structure available chemicals directory, a structure - searchable database of commercially available chemicals cambridge structural database ( csd ), a crystallographic database of organic and organometallic structures inorganic crystal structure database ( icsd ), a crystallographic database of inorganic structures crystalworks, a database combining data from csd, icsd and crystmet detherm, a database of thermophysical data for chemical compounds and mixtures spresiweb, a database of organic compounds and reactions = = references = =
|
In order for crops to grow food safely, pesticides are used on them. When it floods, this causes what to be poisonous?
|
[
"air",
"farmers",
"Corn",
"Runoff"
] |
Key fact:
crop rotation renews soil
|
D
| 3
|
openbookqa
|
the plant proteome database is a national science foundation - funded project to determine the biological function of each protein in plants. it includes data for two plants that are widely studied in molecular biology, arabidopsis thaliana and maize ( zea mays ). initially the project was limited to plant plastids, under the name of the plastid pdb, but was expanded and renamed plant pdb in november 2007. see also proteome references external links plant proteome database home page
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
over the last two centuries many environmental chemical observations have been made from a variety of ground - based, airborne, and orbital platforms and deposited in databases. many of these databases are publicly available. all of the instruments mentioned in this article give online public access to their data. these observations are critical in developing our understanding of the earth's atmosphere and issues such as climate change, ozone depletion and air quality. some of the external links provide repositories of many of these datasets in one place. for example, the cambridge atmospheric chemical database, is a large database in a uniform ascii format. each observation is augmented with the meteorological conditions such as the temperature, potential temperature, geopotential height, and equivalent pv latitude. ground - based and balloon observations ndsc observations. the network for the detection for stratospheric change ( ndsc ) is a set of high - quality remote - sounding research stations for observing and understanding the physical and chemical state of the stratosphere. ozone and key ozone - related chemical compounds and parameters are targeted for measurement. the ndsc is a major component of the international upper atmosphere research effort and has been endorsed by national and international scientific agencies, including the international ozone commission, the united nations environment programme ( unep ), and the world meteorological organization ( wmo ). the primary instruments and measurements are : ozone lidar ( vertical profiles of ozone from the tropopause to at least 40 km altitude
|
A bird that finds itself endangered is
|
[
"sees others of its species in large amounts",
"friendly with many other birds",
"glad to have so much company",
"unlikely to meet more of its type"
] |
Key fact:
endangered means low in population
|
D
| 3
|
openbookqa
|
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
|
the animal genome size database is a catalogue of published genome size estimates for vertebrate and invertebrate animals. it was created in 2001 by dr. t. ryan gregory of the university of guelph in canada. as of september 2005, the database contains data for over 4, 000 species of animals. a similar database, the plant dna c - values database ( c - value being analogous to genome size in diploid organisms ) was created by researchers at the royal botanic gardens, kew, in 1997. see also list of organisms by chromosome count references external links animal genome size database plant dna c - values database fungal genome size database cell size database
|
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb includes also a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without missing the focus on the searched protein or gene. in phylomedb v4 all the information available for each tree
|
A boy hates summer with a burning passion, so luckily the longest he should ever have to endure the season is
|
[
"twelve weeks",
"three years",
"nine months",
"two days"
] |
Key fact:
a new season occurs once per three months
|
A
| 0
|
openbookqa
|
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by
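The rows-and-columns presentation and SQL querying described above can be demonstrated with Python's built-in sqlite3 module; the table and values here are invented for the example.

```python
import sqlite3

# An in-memory relational database: data is presented as a table (rows and
# columns) and manipulated with SQL, as the relational model prescribes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customer (id, name) VALUES (1, 'Ada'), (2, 'Edgar')")

# A relational operator (selection/projection) expressed in SQL:
rows = conn.execute("SELECT name FROM customer ORDER BY id").fetchall()
print(rows)  # [('Ada',), ('Edgar',)]
```

Each row is a tuple of column values, and the query returns another relation, which is the "present the data to the user as relations" requirement from the passage.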
|
until the 1980s, databases were viewed as computer systems that stored record - oriented and business data such as manufacturing inventories, bank records, and sales transactions. a database system was not expected to merge numeric data with text, images, or multimedia information, nor was it expected to automatically notice patterns in the data it stored. in the late 1980s the concept of an intelligent database was put forward as a system that manages information ( rather than data ) in a way that appears natural to users and which goes beyond simple record keeping. the term was introduced in 1989 by the book intelligent databases by kamran parsaye, mark chignell, setrag khoshafian and harry wong. the concept postulated three levels of intelligence for such systems : high level tools, the user interface and the database engine. the high level tools manage data quality and automatically discover relevant patterns in the data with a process called data mining. this layer often relies on the use of artificial intelligence techniques. the user interface uses hypermedia in a form that uniformly manages text, images and numeric data. the intelligent database engine supports the other two layers, often merging relational database techniques with object orientation. in the twenty - first century, intelligent databases have now become widespread, e. g. hospital databases can now call up patient histories consisting of charts, text and x - ray images just with a few mouse clicks, and many corporate databases include decision support tools based on sales pattern analysis. external links intelligent databases, book
|
a temporal database stores data relating to time instances. it offers temporal data types and stores information relating to past, present and future time. temporal databases can be uni - temporal, bi - temporal or tri - temporal. more specifically the temporal aspects usually include valid time, transaction time and / or decision time. valid time is the time period during, or event time at which, a fact is true in the real world. transaction time is the time at which a fact was recorded in the database. decision time is the time at which the decision was made about the fact ; it is used to keep a history of decisions about valid times. types uni - temporal a uni - temporal database has one axis of time, either the validity range or the system time range. bi - temporal a bi - temporal database has two axes of time : valid time, plus transaction time or decision time. tri - temporal a tri - temporal database has three axes of time : valid time, transaction time and decision time. this approach introduces additional complexities. temporal databases are in contrast to current databases ( not to be confused with currently available databases ), which store only facts which are believed to be true at the current time. features temporal databases support managing and accessing temporal data by providing one or more of the following features : a time period datatype, including the ability to represent time periods with no end ( infinity or forever ) the ability to define valid and transaction time period attributes and bitemporal relations system - maintained transaction time temporal primary keys, including
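The bi-temporal idea above — every fact carrying both a valid-time range and a transaction time — can be sketched as plain records with an "as of" query. The schema and dates are hypothetical.

```python
from datetime import date

# Each fact carries a valid-time range (when it was true in the real world)
# and a transaction time (when it was recorded in the database).
facts = [
    # (value, valid_from, valid_to, recorded_on)
    ("address: Oak St", date(2020, 1, 1), date(2022, 1, 1), date(2020, 1, 5)),
    ("address: Elm St", date(2022, 1, 1), date.max,         date(2022, 1, 3)),
]

def as_of(valid_at, known_at):
    """Facts valid at `valid_at`, as the database knew them at `known_at`."""
    return [v for (v, vf, vt, rec) in facts
            if vf <= valid_at < vt and rec <= known_at]

print(as_of(date(2021, 6, 1), date(2021, 1, 1)))  # ['address: Oak St']
```

A current database would keep only the latest address; the two time axes let us ask both "where did they live in mid-2021?" and "what did we believe at the time?".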
|
Mammals are one of a few animals that's core temp
|
[
"fluctuates",
"stays the same",
"drops suddenly",
"heats up"
] |
Key fact:
a mammal is warm-blooded
|
B
| 1
|
openbookqa
|
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
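One of the "common approaches" the passage alludes to — allowing only aggregate queries, and refusing those over too few records — can be sketched as follows. The records, threshold, and function names are illustrative.

```python
# Aggregate-only access with a minimum query-set size, a classic defence
# against inferring a single individual's value from aggregate queries.
records = [
    {"dept": "A", "salary": 50000},
    {"dept": "A", "salary": 60000},
    {"dept": "B", "salary": 70000},
]

MIN_QUERY_SET = 2  # refuse aggregates that cover fewer records than this

def mean_salary(dept):
    vals = [r["salary"] for r in records if r["dept"] == dept]
    if len(vals) < MIN_QUERY_SET:
        return None  # answering would reveal an individual record
    return sum(vals) / len(vals)

print(mean_salary("A"))  # 55000.0
print(mean_salary("B"))  # None -- only one record, so the mean IS the record
```

The refusal for department B illustrates the inference problem: with one matching record, the aggregate and the individual value coincide. (Query-set-size limits alone are known to be insufficient against combinations of overlapping queries, which is why the research the passage mentions went further.)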
|
in statistical mechanics, thermal fluctuations are random deviations of an atomic system from its average state, that occur in a system at equilibrium. all thermal fluctuations become larger and more frequent as the temperature increases, and likewise they decrease as temperature approaches absolute zero. thermal fluctuations are a basic manifestation of the temperature of systems : a system at nonzero temperature does not stay in its equilibrium microscopic state, but instead randomly samples all possible states, with probabilities given by the boltzmann distribution. thermal fluctuations generally affect all the degrees of freedom of a system : there can be random vibrations ( phonons ), random rotations ( rotons ), random electronic excitations, and so forth. thermodynamic variables, such as pressure, temperature, or entropy, likewise undergo thermal fluctuations. for example, for a system that has an equilibrium pressure, the system pressure fluctuates to some extent about the equilibrium value. only the 'control variables' of statistical ensembles ( such as the number of particles n, the volume v and the internal energy e in the microcanonical ensemble ) do not fluctuate. thermal fluctuations are a source of noise in many systems. the random forces that give rise to thermal fluctuations are a source of both diffusion and dissipation ( including damping and viscosity ). the competing effects of random drift and resistance to drift are related by the fluctuation - dissipation theorem. thermal fluctuations play a major role
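The claim that states are occupied with Boltzmann probabilities, and that fluctuations grow with temperature, can be checked numerically for a toy two-state system. Units are chosen so that the Boltzmann constant is 1; the energies are arbitrary.

```python
import math

# Two-state system with Boltzmann occupation probabilities p_i ~ exp(-E_i / T).
E = {"ground": 0.0, "excited": 1.0}

def boltzmann_probs(T):
    """Occupation probabilities at temperature T (k_B = 1)."""
    weights = {s: math.exp(-e / T) for s, e in E.items()}
    Z = sum(weights.values())          # partition function
    return {s: w / Z for s, w in weights.items()}

p_cold = boltzmann_probs(0.1)["excited"]   # near absolute zero: ~0
p_hot = boltzmann_probs(10.0)["excited"]   # high T: approaches 1/2
print(p_cold < p_hot)  # True: excitations (fluctuations) grow with temperature
```

At low temperature the system is almost always in its ground state; at high temperature both states approach equal occupation, so random excursions into the excited state become frequent — the passage's statement in miniature.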
|
in database technologies, a rollback is an operation which returns the database to some previous state. rollbacks are important for database integrity, because they mean that the database can be restored to a clean copy even after erroneous operations are performed. they are crucial for recovering from database server crashes ; by rolling back any transaction which was active at the time of the crash, the database is restored to a consistent state. the rollback feature is usually implemented with a transaction log, but can also be implemented via multiversion concurrency control. cascading rollback a cascading rollback occurs in database systems when a transaction ( t1 ) causes a failure and a rollback must be performed. other transactions dependent on t1's actions must also be rolled back due to t1's failure, thus causing a cascading effect. that is, one transaction's failure causes many to fail. practical database recovery techniques guarantee cascadeless rollback, therefore a cascading rollback is not a desirable result. sql sql refers to structured query language, a kind of language used to access, update and manipulate databases. in sql, rollback is a command that causes all data changes since the last start transaction or begin to be discarded by the relational database management systems ( rdbms ), so that the state of the data is " rolled back " to the way it was before those changes were made. a rollback statement
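The rollback behaviour described above can be demonstrated with Python's built-in sqlite3 module: changes made after the last commit are discarded when a failure occurs. The table and the simulated failure are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account VALUES (1, 100)")
conn.commit()  # this state is durable

try:
    # an erroneous operation inside a new transaction...
    conn.execute("UPDATE account SET balance = balance - 999 WHERE id = 1")
    raise RuntimeError("simulated failure mid-transaction")
except RuntimeError:
    conn.rollback()  # discard all changes since the last commit

# the database is back in its previous consistent state
print(conn.execute("SELECT balance FROM account").fetchone())  # (100,)
```

This is the SQL ROLLBACK semantics from the passage surfaced through a driver API: everything after the implicit BEGIN is undone, and the committed balance of 100 survives.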
|
Which is a byproduct of a lightbulb?
|
[
"taste",
"death",
"warmth",
"sound"
] |
Key fact:
an incandescent light bulb converts electricity into heat by sending electricity through a filament
|
C
| 2
|
openbookqa
|
the bitterdb is a database of compounds that were reported to taste bitter to humans. the aim of the bitterdb database is to gather information about bitter - tasting natural and synthetic compounds, and their cognate bitter taste receptors ( t2rs or tas2rs ). summary the bitterdb includes over 670 compounds that were reported to taste bitter to humans. the compounds can be searched by name, chemical structure, similarity to other bitter compounds, association with a particular human bitter taste receptor, and by other properties as well. the database also contains information on mutations in bitter taste receptors that were shown to influence receptor activation by bitter compounds. database overview bitter compounds bitterdb currently contains more than 670 compounds that were cited in the literature as bitter. for each compound, the database offers information regarding its molecular properties, references for the compounds bitterness, including additional information about the bitterness category of the compound ( e. g. a bitter - sweet or slightly bitter annotation ), different compound identifiers ( smiles, cas registry number, iupac systematic name ), an indication whether the compound is derived from a natural source or is synthetic, a link to the compounds pubchem entry and different file formats for downloading ( sdf, image, smiles ). over 200 bitter compounds have been experimentally linked to their corresponding human bitter taste receptors. for those compounds, bitterdb provides additional information, including links to the publications indicating these ligandreceptor interactions, the effective concentration for receptor
|
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
|
the spectral database for organic compounds ( sdbs ) is a free online searchable database hosted by the national institute of advanced industrial science and technology ( aist ) in japan, that contains spectral data for ca 34, 000 organic molecules. the database is available in english and in japanese and it includes six types of spectra : laser raman spectra, electron ionization mass spectra ( ei - ms ), fourier - transform infrared ( ft - ir ) spectra, 1h nuclear magnetic resonance ( 1h - nmr ) spectra, 13c nuclear magnetic resonance ( 13c - nmr ) spectra and electron paramagnetic resonance ( epr ) spectra. the construction of the database started in 1982. most of the spectra were acquired and recorded in aist and some of the collections are still being updated. since 1997, the database can be accessed free of charge, but its use requires agreeing to a disclaimer ; the total accumulated number of times accessed reached 550 million by the end of january, 2015. content laser raman spectra the database contains ca 3, 500 raman spectra. the spectra were recorded in the region of 4, 000 0 cm1 with an excitation wavelength of 4, 800 nm and a slit width of 100 200 micrometers. this collection is not being updated. electron ionization mass ( ei - ms ) spectra the ei - ms spectra were measured in a jeol jms - 01sg or a jeol jms - 700 spectrom
|
Which is the best thing to do for a neighborhood?
|
[
"growing local daffodils and weeding",
"spreading trash and garbage",
"planting fast spreading plants",
"introducing a new species"
] |
Key fact:
planting native plants has a positive impact on an ecosystem
|
A
| 0
|
openbookqa
|
treefam ( tree families database ) is a database of phylogenetic trees of animal genes. it aims at developing a curated resource that gives reliable information about ortholog and paralog assignments, and evolutionary history of various gene families. treefam defines a gene family as a group of genes that evolved after the speciation of single - metazoan animals. it also tries to include outgroup genes like yeast ( s. cerevisiae and s. pombe ) and plant ( a. thaliana ) to reveal these distant members. treefam is also an ortholog database. unlike other pairwise alignment based ones, treefam infers orthologs by means of gene trees. it fits a gene tree into the universal species tree and finds historical duplications, speciations and losses events. treefam uses this information to evaluate tree building, guide manual curation, and infer complex ortholog and paralog relations. the basic elements of treefam are gene families that can be divided into two parts : treefam - a and treefam - b families. treefam - b families are automatically created. they might contain errors given complex phylogenies. treefam - a families are manually curated from treefam - b ones. family names and node names are assigned at the same time. the ultimate goal of treefam is to present a curated resource for all the families. treefa
|
the plant dna c - values database ( https : / / cvalues. science. kew. org / ) is a comprehensive catalogue of c - value ( nuclear dna content, or in diploids, genome size ) data for land plants and algae. the database was created by prof. michael d. bennett and dr. ilia j. leitch of the royal botanic gardens, kew, uk. the database was originally launched as the " angiosperm dna c - values database " in april 1997, essentially as an online version of collected data lists that had been published by prof. bennett and colleagues since the 1970s. release 1. 0 of the more inclusive plant dna c - values database was launched in 2001, with subsequent releases 2. 0 in january 2003 and 3. 0 in december 2004. in addition to the angiosperm dataset made available in 1997, the database has been expanded taxonomically several times and now includes data from pteridophytes ( since 2000 ), gymnosperms ( since 2001 ), bryophytes ( since 2001 ), and algae ( since 2004 ) ( see ( 1 ) for update history ). ( note that each of these subset databases is cited individually as they may contain different sets of authors ). the most recent release of the database ( release 7. 1 ) went live in april 2019. it contains data for 12, 273 species of plants comprising 10, 770 angiosperms, 421 gymnos
|
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb includes also a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without missing the focus on the searched protein or gene. in phylomedb v4 all the information available for each tree
|
If a cow is offered a choice, it will turn down
|
[
"a bale of hay",
"a piece of carrot",
"a chunk of apple",
"a chunk of pork"
] |
Key fact:
cows only eat plants
|
D
| 3
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are persistently stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as nosql databases. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
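The core idea above — storing program objects directly instead of decomposing them into tables — can be sketched with the standard-library pickle module standing in for the database engine. The classes and names are illustrative, not any real OODBMS API.

```python
import pickle

# An object graph as a program would build it: children are held as object
# references, not foreign keys into another table.
class Part:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

engine = Part("engine", [Part("piston"), Part("valve")])

blob = pickle.dumps(engine)      # "store" the whole object graph as-is
restored = pickle.loads(blob)    # retrieve it in the same representation

print([c.name for c in restored.children])  # ['piston', 'valve']
```

The point of the passage is that storage and program use the same model of representation: nothing here maps objects to rows and back, which is the work a relational application would have to do.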
|
Like humans, when some animals get hot, they do what in order to lower their body temperature
|
[
"spend money",
"raise blood temperature",
"fly away",
"perspire"
] |
Key fact:
sweat is used for adjusting to hot temperatures by some animals
|
D
| 3
|
openbookqa
|
a cost database is a computerized database of cost estimating information, which is normally used with construction estimating software to support the formation of cost estimates. a cost database may also simply be an electronic reference of cost data. overview a cost database includes the electronic equivalent of a cost book, or cost reference book, a tool used by estimators for many years. cost books may be internal records at a particular company or agency, or they may be commercially published books on the open market. aec teams and federal agencies can and often do collect internally sourced data from their own specialists, vendors, and partners. this is valuable personalized cost data that is captured but often doesn't cover the same range that commercial cost book data can. internally sourced data is difficult to maintain and does not have the same level of developed user interface or functionality as a commercial product. the cost database may be stored in a relational database management system, which may be in either an open or proprietary format, serving the data to the cost estimating software. the cost database may be hosted in the cloud. estimators use a cost database to store data in a structured way which is easy to manage and retrieve. details costing data the most basic element of a cost estimate and therefore the cost database is the estimate line item or work item. an example is " concrete, 4000 psi ( 30 mpa ), " which is the description of the item. in the cost database, an item is a row or record in
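The work item described above — a row with a description, unit, and unit cost — can be sketched as a record plus a lookup that prices a quantity. The item code and unit cost are made up for the example; only the "Concrete, 4000 psi (30 MPa)" description comes from the passage.

```python
# A work item as one record (row) in the cost database.
cost_items = [
    {"code": "03-3000",  # hypothetical item code
     "description": "Concrete, 4000 psi (30 MPa)",
     "unit": "m3",
     "unit_cost": 135.00},  # illustrative unit cost
]

def estimate(code, quantity):
    """Price a line item: quantity times the item's unit cost."""
    item = next(i for i in cost_items if i["code"] == code)
    return quantity * item["unit_cost"]

print(estimate("03-3000", 10))  # 1350.0
```

An estimate is then a sum of such priced line items, which is what the estimating software layered on top of the database computes.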
|
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
|
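The aggregate-query inference problem described in the statistical-database chunk above can be made concrete with a small sketch ( the table name, columns, and values below are invented purely for illustration ) : two individually legal aggregate queries differ in exactly one record, so their difference leaks that record.

```python
import sqlite3

# Hypothetical data: a table where only aggregates (SUM, AVG, COUNT) may be queried.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE census (name TEXT, city TEXT, salary REAL)")
conn.executemany("INSERT INTO census VALUES (?, ?, ?)", [
    ("alice", "guelph", 70000.0),
    ("bob", "guelph", 50000.0),
    ("carol", "toronto", 60000.0),
])

# Two legal aggregate queries...
total_all = conn.execute("SELECT SUM(salary) FROM census").fetchone()[0]
total_without = conn.execute(
    "SELECT SUM(salary) FROM census WHERE NOT (name = 'alice')"
).fetchone()[0]

# ...whose difference exposes a single individual's record.
alice_salary = total_all - total_without
print(alice_salary)  # 70000.0
```

Defenses such as query-set-size restriction or noise addition exist precisely to block this kind of tracker attack.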
the countercurrent heat exchange mechanism is a physiological adaptation found in many animals, particularly those living in cold environments. it involves the close proximity of arteries and veins, which allows for the transfer of heat from warm arterial blood to cooler venous blood as it returns to the core of the body. this system minimizes the loss of heat to the environment by warming the blood returning to the core, thereby conserving body heat. in contrast, without this mechanism, the extremities would lose much more heat, and the returning blood would cool the core body temperature, requiring additional metabolic energy to maintain thermal homeostasis.
|
A skunk produces a bad what?
|
[
"heat",
"smeller perception",
"cold",
"color"
] |
Key fact:
a skunk produces a bad odor
|
B
| 1
|
openbookqa
|
the bitterdb is a database of compounds that were reported to taste bitter to humans. the aim of the bitterdb database is to gather information about bitter - tasting natural and synthetic compounds, and their cognate bitter taste receptors ( t2rs or tas2rs ). summary the bitterdb includes over 670 compounds that were reported to taste bitter to humans. the compounds can be searched by name, chemical structure, similarity to other bitter compounds, association with a particular human bitter taste receptor, and by other properties as well. the database also contains information on mutations in bitter taste receptors that were shown to influence receptor activation by bitter compounds. database overview bitter compounds bitterdb currently contains more than 670 compounds that were cited in the literature as bitter. for each compound, the database offers information regarding its molecular properties, references for the compound's bitterness, including additional information about the bitterness category of the compound ( e. g. a bitter - sweet or slightly bitter annotation ), different compound identifiers ( smiles, cas registry number, iupac systematic name ), an indication whether the compound is derived from a natural source or is synthetic, a link to the compound's pubchem entry, and different file formats for downloading ( sdf, image, smiles ). over 200 bitter compounds have been experimentally linked to their corresponding human bitter taste receptors. for those compounds, bitterdb provides additional information, including links to the publications indicating these ligand - receptor interactions, the effective concentration for receptor
|
a heat map ( or heatmap ) is a 2 - dimensional data visualization technique that represents the magnitude of individual values within a dataset as a color. the variation in color may be by hue or intensity. in some applications such as crime analytics or website click - tracking, color is used to represent the density of data points rather than a value associated with each point. " heat map " is a relatively new term, but the practice of shading matrices has existed for over a century. history heat maps originated in 2d displays of the values in a data matrix. larger values were represented by small dark gray or black squares ( pixels ) and smaller values by lighter squares. the earliest known example dates to 1873, when toussaint loua used a hand - drawn and colored shaded matrix to visualize social statistics across the districts of paris. the idea of reordering rows and columns to reveal structure in a data matrix, known as seriation, was introduced by flinders petrie in 1899. in 1950, louis guttman developed the scalogram, a method for ordering binary matrices to expose a one - dimensional scale structure. in 1957, peter sneath displayed the results of a cluster analysis by permuting the rows and the columns of a matrix to place similar values near each other according to the clustering. this idea was implemented by robert ling in 1973 with a computer program called shade. ling used overstruck printer characters to represent different shades of gray, one character -
|
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ) system, rather than an oltp ( online transaction processing ) system. modern decision-support and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse, with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in place and use compression techniques to squeeze them out, or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
|
If a tree is cut down what will happen to it?
|
[
"vitality extinguishing",
"growth",
"vigor",
"life"
] |
Key fact:
if a tree is cut down then that tree will die
|
A
| 0
|
openbookqa
|
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ) system, rather than an oltp ( online transaction processing ) system. modern decision-support and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse, with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in place and use compression techniques to squeeze them out, or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
|
in bioinformatics, a gene disease database is a systematized collection of data, typically structured to model aspects of reality, in a way to comprehend the underlying mechanisms of complex diseases, by understanding multiple composite interactions between phenotype - genotype relationships and gene - disease mechanisms. gene disease databases integrate human gene - disease associations from various expert curated databases and text mining derived associations including mendelian, complex and environmental diseases. introduction experts in different areas of biology and bioinformatics have been trying to comprehend the molecular mechanisms of diseases to design preventive and therapeutic strategies for a long time. for some illnesses, it has become apparent that it is not enough to obtain an index of the disease - related genes ; one must also uncover how disruptions of molecular networks in the cell give rise to disease phenotypes. moreover, even with the unprecedented wealth of information available, obtaining such catalogues is extremely difficult. genetic broadly speaking, genetic diseases are caused by aberrations in genes or chromosomes. many genetic diseases develop before birth. genetic disorders account for a significant number of the health care problems in our society. advances in the understanding of these diseases have increased both the life span and quality of life for many of those affected by genetic disorders. recent developments in bioinformatics and laboratory genetics have made possible the better delineation of certain malformation and mental retardation syndromes, so that their mode of inheritance
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
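The property-graph model described in the graph-database chunk above can be sketched in a few lines ( this mirrors no particular product's API ; the class and method names are invented ) : relationships are stored alongside the nodes, so traversing one is a direct lookup rather than a join.

```python
# A minimal sketch of a property graph: nodes and edges both carry
# properties, and edges are labelled and directed.
class PropertyGraph:
    def __init__(self):
        self.nodes = {}        # node id -> property dict
        self.out_edges = {}    # node id -> list of (label, target, props)

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props
        self.out_edges.setdefault(node_id, [])

    def add_edge(self, src, label, dst, **props):
        # Relationships are first-class: labelled, directed, with properties.
        self.out_edges[src].append((label, dst, props))

    def neighbours(self, node_id, label=None):
        # Traversal is a direct lookup on the stored relationships.
        return [dst for (lbl, dst, _) in self.out_edges[node_id]
                if label is None or lbl == label]

g = PropertyGraph()
g.add_node("alice", kind="person")
g.add_node("order1", kind="order")
g.add_edge("alice", "PLACED", "order1", year=2021)
print(g.neighbours("alice", "PLACED"))  # ['order1']
```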
Which of these animals could keep itself warmest?
|
[
"a squirrel",
"a whale",
"a chihuahua",
"a cat"
] |
Key fact:
fat is used to keep animals warm
|
B
| 1
|
openbookqa
|
a taxonomic database is a database created to hold information on biological taxa ( for example, groups of organisms organized by species name or other taxonomic identifier ) for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
the animal genome size database is a catalogue of published genome size estimates for vertebrate and invertebrate animals. it was created in 2001 by dr. t. ryan gregory of the university of guelph in canada. as of september 2005, the database contains data for over 4, 000 species of animals. a similar database, the plant dna c - values database ( c - value being analogous to genome size in diploid organisms ) was created by researchers at the royal botanic gardens, kew, in 1997. see also list of organisms by chromosome count references external links animal genome size database plant dna c - values database fungal genome size database cell size database
|
Conservation is
|
[
"using Styrofoam plates for every meal",
"reusing gift bags again and again",
"throwing plastic bottles in the landfill",
"driving a gas guzzling truck"
] |
Key fact:
An example of conservation is avoiding waste
|
B
| 1
|
openbookqa
|
an array database management system or array dbms provides database services specifically for arrays ( also called raster data ), that is : homogeneous collections of data items ( often called pixels, voxels, etc. ), sitting on a regular grid of one, two, or more dimensions. often arrays are used to represent sensor, simulation, image, or statistics data. such arrays tend to be big data, with single objects frequently ranging into terabyte and soon petabyte sizes ; for example, today's earth and space observation archives typically grow by terabytes a day. array databases aim at offering flexible, scalable storage and retrieval on this information category. overview in the same style as standard database systems do on sets, array dbmss offer scalable, flexible storage and flexible retrieval / manipulation on arrays of ( conceptually ) unlimited size. as in practice arrays never appear standalone, such an array model normally is embedded into some overall data model, such as the relational model. some systems implement arrays as an analogy to tables, some introduce arrays as an additional attribute type. management of arrays requires novel techniques, particularly due to the fact that traditional database tuples and objects tend to fit well into a single database page ( a unit of disk access on a server, typically 4 kb ), while array objects can easily span several media. the prime task of the array storage manager is to give fast access to large arrays and sub - arrays. to this end, arrays get partitioned, during insertion, into
|
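The partitioning idea the array-DBMS chunk above ends on can be illustrated with a toy tiling function ( real array DBMSs do this on disk ; plain nested lists stand in here, and the function name is invented ) : a sub-array request then only needs to touch the tiles it intersects.

```python
# Partition a 2-d array into fixed-size tiles, keyed by the tile's
# top-left coordinate, so sub-array access can skip unrelated tiles.
def tile(array, tile_rows, tile_cols):
    tiles = {}
    for r0 in range(0, len(array), tile_rows):
        for c0 in range(0, len(array[0]), tile_cols):
            tiles[(r0, c0)] = [row[c0:c0 + tile_cols]
                               for row in array[r0:r0 + tile_rows]]
    return tiles

# A 4x4 grid holding the values 0..15 row by row.
grid = [[r * 4 + c for c in range(4)] for r in range(4)]
tiles = tile(grid, 2, 2)
# The (2, 2) tile holds the bottom-right quarter of the array.
print(tiles[(2, 2)])  # [[10, 11], [14, 15]]
```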
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
|
object storage ( also known as object - based storage or blob storage ) is a computer data storage approach that manages data as " blobs " or " objects ", as opposed to other storage architectures like file systems, which manage data as a file hierarchy, and block storage, which manages data as blocks within sectors and tracks. each object is typically associated with a variable amount of metadata, and a globally unique identifier. object storage can be implemented at multiple levels, including the device level ( object - storage device ), the system level, and the interface level. in each case, object storage seeks to enable capabilities not addressed by other storage architectures, like interfaces that are directly programmable by the application, a namespace that can span multiple instances of physical hardware, and data - management functions like data replication and data distribution at object - level granularity. object storage systems allow retention of massive amounts of unstructured data in which data is written once and read once ( or many times ). object storage is used for purposes such as storing objects like videos and photos on facebook, songs on spotify, or files in online collaboration services, such as dropbox. one of the limitations with object storage is that it is not intended for transactional data, as object storage was not designed to replace nas file access and sharing ; it does not support the locking and sharing mechanisms needed to maintain a single, accurately updated version of a file. history origins jim starkey coined
|
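The object-storage model described above ( flat namespace, globally unique identifier, per-object metadata, write once / read many ) can be sketched in memory; the class below is an invented illustration, not any real service's API.

```python
import uuid

# A minimal in-memory sketch of object storage: each blob gets a
# globally unique id plus arbitrary metadata; there is no file
# hierarchy and no in-place update.
class ObjectStore:
    def __init__(self):
        self._objects = {}

    def put(self, data: bytes, **metadata) -> str:
        object_id = str(uuid.uuid4())      # globally unique identifier
        self._objects[object_id] = (data, metadata)
        return object_id

    def get(self, object_id):
        return self._objects[object_id]    # (data, metadata)

store = ObjectStore()
oid = store.put(b"\x89PNG...", content_type="image/png", owner="alice")
data, meta = store.get(oid)
print(meta["content_type"])  # image/png
```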
Why should people conserve gas when fueling their cars?
|
[
"because it can only be used once",
"because it is hard to find",
"because the more it is used the more it costs",
"because it can mess up their engines"
] |
Key fact:
fossil fuels are a nonrenewable resource
|
A
| 0
|
openbookqa
|
the problem of database repair is a question about relational databases which has been studied in database theory, and which is a particular kind of data cleansing. the problem asks about how we can " repair " an input relational database in order to make it satisfy integrity constraints. the goal of the problem is to be able to work with data that is " dirty ", i. e., does not satisfy the right integrity constraints, by reasoning about all possible repairs of the data, i. e., all possible ways to change the data to make it satisfy the integrity constraints, without committing to a specific choice. several variations of the problem exist, depending on : what we intend to figure out about the dirty data : figuring out if some database tuple is certain ( i. e., is in every repaired database ), figuring out if some query answer is certain ( i. e., the answer is returned when evaluating the query on every repaired database ) which kinds of ways are allowed to repair the database : can we insert new facts, remove facts ( so - called subset repairs ), and so on which repaired databases do we study : those where we only change a minimal subset of the database tuples ( e. g., minimal subset repairs ), those where we only change a minimal number of database tuples ( e. g., minimal cardinality repairs ) the problem of database repair has been studied to understand what is the complexity of these different problem variants, i. e.,
|
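The database-repair notions above ( repairs, minimal subset repairs, certain tuples ) can be made concrete for the simplest case, a key constraint: a minimal subset repair keeps exactly one tuple per key value, and a tuple is certain iff it survives in every repair. The function below is an invented toy, not an algorithm from the text.

```python
# Enumerate all minimal subset repairs of a relation violating a key
# constraint: group tuples by key, then keep one tuple per group.
def minimal_subset_repairs(tuples, key):
    groups = {}
    for t in tuples:
        groups.setdefault(t[key], []).append(t)
    repairs = [[]]
    for group in groups.values():
        # Every choice of one representative per key group is a repair.
        repairs = [r + [choice] for r in repairs for choice in group]
    return repairs

# Dirty data: "alice" appears twice under a key on position 0.
dirty = [("alice", 30), ("alice", 31), ("bob", 25)]
repairs = minimal_subset_repairs(dirty, key=0)
# A tuple is *certain* iff it appears in every repair.
certain = [t for t in dirty if all(t in r for r in repairs)]
print(certain)  # [('bob', 25)]
```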
being able to attach data probes and to see a program run gives one a grasp of detail that is hard to obtain in any other way. a program's execution need not be controlled by the usual explicit sequential flow conventions.
|
there can also be production configurations that cause security problems. these issues can put the legacy system at risk of being compromised by attackers or knowledgeable insiders. integration with newer systems may also be difficult because new software may use completely different technologies.
|
An iris may have trouble thriving in an arid location, or even in a frozen location, because it needs
|
[
"to be dried and preserved",
"a certain climate to boom",
"a bucket of ice water",
"to be buried in snow"
] |
Key fact:
a plant requires a specific climate to grow and survive
|
B
| 1
|
openbookqa
|
water that flows over the land from precipitation or melting snow or ice.
|
then the bulk of effort concentrates on writing the proper mediator code that will transform predicates on weather into a query over the weather website. this effort can become complex if some other source also relates to weather, because the designer may need to write code to properly combine the results from the two sources. on the other hand, in lav, the source database is modeled as a set of views over g { \ displaystyle g }.
|
then the bulk of effort concentrates on writing the proper mediator code that will transform predicates on weather into a query over the weather website. this effort can become complex if some other source also relates to weather, because the designer may need to write code to properly combine the results from the two sources. on the other hand, in lav, the source database is modeled as a set of views over g { \ displaystyle g }.
|
Coral gets some help with algae for their
|
[
"love",
"dating advice",
"happiness",
"vibrance"
] |
Key fact:
usually coral lives in warm water
|
D
| 3
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by
|
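The minimum the chunk above asks of a relational system ( data presented as tables of rows and columns, plus relational operators to manipulate them ) can be shown with sqlite from the standard library; the table names and values are invented for illustration. Note how the join relates rows by value, rather than by following parent/child links as in the hierarchical model.

```python
import sqlite3

# Two relations presented as tables of rows and columns.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER)")
conn.execute("INSERT INTO customer VALUES (1, 'alice')")
conn.execute("INSERT INTO orders VALUES (10, 1)")

# A relational operator (join) relates rows across tables by value.
rows = conn.execute(
    "SELECT customer.name, orders.id FROM customer "
    "JOIN orders ON orders.customer_id = customer.id"
).fetchall()
print(rows)  # [('alice', 10)]
```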
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ) system, rather than an oltp ( online transaction processing ) system. modern decision-support and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse, with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in place and use compression techniques to squeeze them out, or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
|
A kangaroo can have multiple babies at various stages of life at the same time. These joeys can show remarkable instincts, in that they
|
[
"hallucinate",
"nurse",
"scream",
"are born"
] |
Key fact:
An example of an instinct is the kangaroo 's ability to crawl into its mother 's pouch to drink milk
|
B
| 1
|
openbookqa
|
in the field of artificial intelligence ( ai ), a hallucination or artificial hallucination ( also called bullshitting, confabulation or delusion ) is a response generated by ai that contains false or misleading information presented as fact. this term draws a loose analogy with human psychology, where hallucination typically involves false percepts. however, there is a key difference : ai hallucination is associated with erroneously constructed responses ( confabulation ), rather than perceptual experiences. for example, a chatbot powered by large language models ( llms ), like chatgpt, may embed plausible - sounding random falsehoods within its generated content. researchers have recognized this issue, and by 2023, analysts estimated that chatbots hallucinate as much as 27 % of the time, with factual errors present in 46 % of generated texts. detecting and mitigating these hallucinations pose significant challenges for practical deployment and reliability of llms in real - world scenarios. some people believe the specific term " ai hallucination " unreasonably anthropomorphizes computers. term origin in 1995, stephen thaler demonstrated how hallucinations and phantom experiences emerge from artificial neural networks through random perturbation of their connection weights. in the early 2000s, the term " hallucination " was used in computer vision with a positive connotation to describe the process of adding detail to an image. for example, the task of
|
the hospital records database is a database provided by the wellcome trust and uk national archives which provides information on the existence and location of the records of uk hospitals. this includes the location and dates of administrative and clinical records, the existence of catalogues, and links to some online hospital catalogues. the website was proposed as a resource of the month by the royal society of medicine in 2009.
|
the eukaryotic pathogen vector and host database, or veupathdb, is a database of genomics and experimental data related to various eukaryotic pathogens. it was established in 2006 under a national institutes of health program to create bioinformatics resource centers to facilitate research on pathogens that may pose biodefense threats. veupathdb stores data related to its organisms of interest and provides tools for searching through and analyzing the data. it currently consists of 14 component databases, each dedicated to a certain research topic. veupathdb includes : genomics resources covering eukaryotic protozoan parasites host responses to parasite infection ( hostdb ) orthologs ( orthomcl ) clinical study data ( clinepidb ) microbiome data ( microbiomedb ) history veupathdb was established under the nih bioinformatics resource centers program as apidb, a resource meant to cover apicomplexan parasites. apidb originally consisted of component sites cryptodb ( for cryptosporidium ), plasmodb ( for plasmodium ), and toxodb ( for toxoplasma gondii ). as apidb grew to focus on eukaryotic pathogens beyond apicomplexans, the name was changed to eupathdb to support its broadened scope. eupathdb was the result of collaboration between many different parasitologists, including david roos,
|
Having food has a positive impact on the health of what?
|
[
"organism charts",
"water",
"sunlight",
"critters"
] |
Key fact:
having food has a positive impact on an organism 's health
|
D
| 3
|
openbookqa
|
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
|
in bioinformatics, the ciliate mds / ies database is a biological database of spirotrich genes ( see also : spirotrich ). external link : http://oxytricha.princeton.edu/dimorphism/database.htm
|
the world register of marine species ( worms ) is a taxonomic database that aims to provide an authoritative and comprehensive catalogue and list of names of marine organisms. content the content of the registry is edited and maintained by scientific specialists on each group of organism. these taxonomists control the quality of the information, which is gathered from the primary scientific literature as well as from some external regional and taxon - specific databases. worms maintains valid names of all marine organisms, but also provides information on synonyms and invalid names. it is an ongoing task to maintain the registry, since new species are constantly being discovered and described by scientists ; in addition, the nomenclature and taxonomy of existing species is often corrected or changed as new research is constantly being published. subsets of worms content are made available, and can have separate badging and their own home / launch pages, as " subregisters ", such as the world list of marine acanthocephala, world list of actiniaria, world amphipoda database, world porifera database, and so on. as of december 2018 there were 60 such taxonomic subregisters, including a number presently under construction. a second category of subregisters comprises regional species databases such as the african register of marine species, belgian register of marine species, etc., while a third comprises thematic subsets such as the world register of deep - sea species ( wordss ), world register of introduced marine species ( wrims ), etc
|
the nervous system sends observations in the form of electrical signals to what?
|
[
"cell towers",
"persons flesh",
"computers",
"plugs"
] |
Key fact:
the nervous system sends observations in the form of electrical signals to the rest of the body
|
B
| 1
|
openbookqa
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, object - relational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
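The chunk above describes relationships as labelled, directed, first-class objects that can be traversed without joins. A minimal in-memory sketch of that idea (all class and node names here are hypothetical, not from any real graph database API):

```python
# Minimal in-memory property graph: nodes and labelled, directed
# edges ("relationships") both carry a dict of properties, and
# traversal reads stored edges directly instead of joining tables.
class PropertyGraph:
    def __init__(self):
        self.nodes = {}   # node_id -> property dict
        self.edges = []   # (src, label, dst, property dict)

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props

    def add_edge(self, src, label, dst, **props):
        # the relationship itself is stored as a first-class record
        self.edges.append((src, label, dst, props))

    def neighbours(self, node_id, label=None):
        # follow outgoing edges, optionally filtered by label
        return [dst for (src, lbl, dst, _) in self.edges
                if src == node_id and (label is None or lbl == label)]

g = PropertyGraph()
g.add_node("alice", kind="person")
g.add_node("bob", kind="person")
g.add_edge("alice", "knows", "bob", since=2020)
print(g.neighbours("alice", "knows"))  # -> ['bob']
```

A real graph database adds indexing, persistence, and a query language on top, but the one-hop traversal above is the core operation the chunk contrasts with relational joins.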
|
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by
|
If a day has passed, so has
|
[
"most of a month",
"most of thirty hours",
"most of a week",
"most of a year"
] |
Key fact:
one day is equal to 24 hours
|
B
| 1
|
openbookqa
|
a calendar queue ( cq ) is a priority queue ( queue in which every element has associated priority and the dequeue operation removes the highest priority element ). it is analogous to desk calendar, which is used by humans for ordering future events by date. discrete event simulations require a future event list ( fel ) structure that sorts pending events according to their time. such simulators require a good and efficient data structure as time spent on queue management can be significant. the calendar queue ( with optimum bucket size ) can approach o ( 1 ) average performance. calendar queues are closely related to bucket queues but differ from them in how they are searched and in being dynamically resized. implementation theoretically, like a bucket queue, a calendar queue consists of an array of linked lists. sometimes each index in the array is also referred to as a bucket. the bucket has specified width and its linked list holds events whose timestamp maps to that bucket. a desk calendar has 365 buckets for each day with a width of one day. each array element contains one pointer that is the head of the corresponding linked list. if the array name is " month " then month [ 11 ] is a pointer to the list of events scheduled for the 12th month of the year ( the vector index starts from 0 ). the complete calendar thus consists of an array of 12 pointers and a collection of up to 12 linked lists. in calendar queue, enqueue ( addition in a queue ) and
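The bucket-mapping scheme the chunk describes can be sketched as follows. This is a simplified illustration, not a full calendar queue: it uses a fixed bucket count, sorts each bucket on insert, and finds the earliest event by scanning all bucket heads rather than sweeping forward from the current "day" as the real algorithm does.

```python
# Sketch of a calendar queue: an array of "day" buckets, each holding
# (timestamp, event) pairs for timestamps that map to that bucket.
class CalendarQueue:
    def __init__(self, n_buckets=12, width=1.0):
        self.width = width                        # "day" length
        self.buckets = [[] for _ in range(n_buckets)]

    def enqueue(self, timestamp, event):
        # map the timestamp to a bucket, wrapping like a yearly calendar
        idx = int(timestamp / self.width) % len(self.buckets)
        self.buckets[idx].append((timestamp, event))
        self.buckets[idx].sort()                  # keep each "day" ordered

    def dequeue(self):
        # simplified: pick the earliest head among non-empty buckets
        # (assumes the queue is non-empty)
        ts, ev = min(b[0] for b in self.buckets if b)
        idx = int(ts / self.width) % len(self.buckets)
        return self.buckets[idx].pop(0)

cq = CalendarQueue()
cq.enqueue(3.5, "c")
cq.enqueue(0.2, "a")
cq.enqueue(1.7, "b")
print(cq.dequeue())  # -> (0.2, 'a')
```

The O(1) average behaviour mentioned above depends on tuning bucket count and width to the event distribution, which this sketch omits.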
|
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by
|
a temporal database stores data relating to time instances. it offers temporal data types and stores information relating to past, present and future time. temporal databases can be uni - temporal, bi - temporal or tri - temporal. more specifically the temporal aspects usually include valid time, transaction time and / or decision time. valid time is the time period during or event time at which a fact is true in the real world. transaction time is the time at which a fact was recorded in the database. decision time is the time at which the decision was made about the fact. used to keep a history of decisions about valid times. types uni - temporal a uni - temporal database has one axis of time, either the validity range or the system time range. bi - temporal a bi - temporal database has two axes of time : valid time transaction time or decision time tri - temporal a tri - temporal database has three axes of time : valid time transaction time decision time this approach introduces additional complexities. temporal databases are in contrast to current databases ( not to be confused with currently available databases ), which store only facts which are believed to be true at the current time. features temporal databases support managing and accessing temporal data by providing one or more of the following features : a time period datatype, including the ability to represent time periods with no end ( infinity or forever ) the ability to define valid and transaction time period attributes and bitemporal relations system - maintained transaction time temporal primary keys, including
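The valid-time / transaction-time distinction above can be made concrete with a tiny bi-temporal sketch. All field and function names are hypothetical; a real temporal database would also close the old valid period when a correction arrives, which this example skips for brevity.

```python
# Bi-temporal sketch: each fact carries a valid-time interval (when it
# is true in the real world) and a transaction time (when it was
# recorded). ISO date strings compare correctly as plain strings.
from dataclasses import dataclass

INFINITY = "9999-12-31"  # open-ended "forever" end of a period

@dataclass
class BitemporalFact:
    entity: str
    value: str
    valid_from: str
    valid_to: str       # INFINITY means "until further notice"
    recorded_at: str    # transaction time, never updated in place

history = [
    BitemporalFact("alice", "london", "2020-01-01", INFINITY, "2020-01-05"),
    # recorded later: alice actually moved to paris in mid-2021
    BitemporalFact("alice", "paris", "2021-06-01", INFINITY, "2021-06-10"),
]

def value_as_of(entity, valid_date, known_date):
    """What did the database say about valid_date, as of known_date?"""
    matches = [f for f in history
               if f.entity == entity
               and f.valid_from <= valid_date < f.valid_to
               and f.recorded_at <= known_date]
    return matches[-1].value if matches else None

print(value_as_of("alice", "2021-07-01", "2021-01-01"))  # -> london
print(value_as_of("alice", "2021-07-01", "2022-01-01"))  # -> paris
```

The two queries show the point of bi-temporality: the same real-world date gives different answers depending on when the question is asked, because the correction had not yet been recorded at the earlier transaction time.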
|
An incandescent light bulb requires a filament to
|
[
"emit illumination",
"emit radiation",
"convert mechanical energy",
"convert chemical energy"
] |
Key fact:
an incandescent light bulb converts electricity into light by sending electricity through a filament
|
A
| 0
|
openbookqa
|
most circuits have devices such as light bulbs that convert electrical energy to other forms of energy. in the case of a light bulb, electrical energy is converted to light and thermal energy.
|
the chemical database service is an epsrc - funded mid - range facility that provides uk academic institutions with access to a number of chemical databases. it has been hosted by the royal society of chemistry since 2013, before which it was hosted by daresbury laboratory ( part of the science and technology facilities council ). currently, the included databases are : acd / i - lab, a tool for prediction of physicochemical properties and nmr spectra from a chemical structure available chemicals directory, a structure - searchable database of commercially available chemicals cambridge structural database ( csd ), a crystallographic database of organic and organometallic structures inorganic crystal structure database ( icsd ), a crystallographic database of inorganic structures crystalworks, a database combining data from csd, icsd and crystmet detherm, a database of thermophysical data for chemical compounds and mixtures spresiweb, a database of organic compounds and reactions = = references = =
|
in nuclear data evaluation and validation, a library and a database serve different purposes, but both are essential for accurate predictions in theoretical nuclear reactor models. a nuclear data library is a collection of evaluated nuclear data files that contain information about various nuclear reactions, decay processes, and other relevant properties of atomic nuclei. these libraries are created through a rigorous evaluation process that combines experimental data, theoretical models, and statistical methods to provide the best possible representation of nuclear properties. some well - known nuclear data libraries include endf ( evaluated nuclear data file ), jeff ( joint evaluated fission and fusion ), and jendl ( japanese evaluated nuclear data library ). on the other hand, a nuclear database is a structured and organized collection of raw experimental and theoretical data related to nuclear reactions and properties. these databases store information from various sources, such as experimental measurements, theoretical calculations, and simulations. they serve as a primary source of information for nuclear data evaluators when creating nuclear data libraries. examples of nuclear databases include exfor ( experimental nuclear reaction data ), cinda ( computer index of nuclear data ), and ensdf ( evaluated nuclear structure data file ). the choice between a library and a database affects the accuracy of nuclear data predictions in a theoretical nuclear reactor model in several ways : 1. quality of data : nuclear data libraries contain evaluated data, which means they have undergone a thorough evaluation process to ensure their accuracy and reliability. in contrast, databases contain raw data that may not have been evaluated or validated.
|
An organism that can survive without the help of other cells is
|
[
"Brewer's yeast",
"air",
"sand",
"sugar"
] |
Key fact:
a single-cell organism can survive without the help of other cells
|
A
| 0
|
openbookqa
|
eurocarbdb was an eu - funded initiative for the creation of software and standards for the systematic collection of carbohydrate structures and their experimental data, which was discontinued in 2010 due to lack of funding. the project included a database of known carbohydrate structures and experimental data, specifically mass spectrometry, hplc and nmr data, accessed via a web interface that provides for browsing, searching and contribution of structures and data to the database. the project also produces a number of associated bioinformatics tools for carbohydrate researchers : glycanbuilder, a java applet for drawing glycan structures ; glycoworkbench, a standalone java application for semi - automated analysis and annotation of glycan mass spectra ; glycopeakfinder, a webapp for calculating glycan compositions from mass data. the canonical online version of eurocarbdb was hosted by the european bioinformatics institute at www. ebi. ac. uk up to 2012, and then relax. organ. su. se. eurocarb code has since been incorporated into and extended by unicarb - db, which also includes the work of the defunct glycosuite database.
|
in the field of bioinformatics, a sequence database is a type of biological database that is composed of a large collection of computerized ( " digital " ) nucleic acid sequences, protein sequences, or other polymer sequences stored on a computer. the uniprot database is an example of a protein sequence database. as of 2013 it contained over 40 million sequences and is growing at an exponential rate. historically, sequences were published in paper form, but as the number of sequences grew, this storage method became unsustainable. search searching in a sequence database involves looking for similarities between a genomic / protein sequence and a query string and finding the sequence in the database that " best " matches the target sequence ( based on criteria which vary depending on the search method ). the number of matches / hits is used to formulate a score that determines the similarity between the sequence query and the sequences in the sequence database. the main goal is to have a good balance between the two criteria. history 1950 the need for sequence databases originated in 1950 when frederick sanger reported the primary structure of insulin. he won his second nobel prize for creating methods for sequencing nucleic acids, and his comparative approach is what sparked other protein biochemists to begin collecting amino acid sequences. thus marking the beginning of molecular databases. 1960 in 1965 margaret dayhoff and her team at the national biomedical research foundation ( nbrf ) published " the atlas of protein sequence and structure ". they put all known protein
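The match-counting search the chunk describes can be illustrated with a toy scorer. This is a deliberately naive sketch (the database, names, and k-mer size are all made up); real tools such as BLAST use seeded alignment with far more sophisticated scoring.

```python
# Toy sequence-database search: rank stored sequences by how many
# 3-mers (length-3 substrings) they share with the query, and return
# the name of the best-scoring sequence.
def kmers(seq, k=3):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def search(database, query, k=3):
    q = kmers(query, k)
    scored = [(len(q & kmers(seq, k)), name)
              for name, seq in database.items()]
    # highest shared-k-mer count = "best" match
    return max(scored)[1]

db = {
    "insulin_like": "ATGGCCAAATTT",
    "unrelated":    "GGGGCCCCGGGG",
}
print(search(db, "ATGGCCAAA"))  # -> insulin_like
```

The shared-k-mer count plays the role of the similarity score in the text: more hits between query and stored sequence means a better match.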
|
saccharomyces cerevisiae ( brewer's yeast or baker's yeast ) is a species of yeast ( single - celled fungal microorganisms ). the species has been instrumental in winemaking, baking, and brewing since ancient times. it is believed to have been originally isolated from the skin of grapes. it is one of the most intensively studied eukaryotic model organisms in molecular and cell biology, much like escherichia coli as the model bacterium. it is the microorganism which causes many common types of fermentation. s. cerevisiae cells are round to ovoid, 5 - 10 micrometres in diameter. it reproduces by budding. many proteins important in human biology were first discovered by studying their homologs in yeast ; these proteins include cell cycle proteins, signaling proteins, and protein - processing enzymes. s. cerevisiae is currently the only yeast cell known to have berkeley bodies present, which are involved in particular secretory pathways. antibodies against s. cerevisiae are found in 60 - 70 % of patients with crohn's disease and 10 - 15 % of patients with ulcerative colitis, and may be useful as part of a panel of serological markers in differentiating between inflammatory bowel diseases ( e. g. between ulcerative colitis and crohn's disease ), their localization and severity. etymology " saccharomyces " derives
|
A creature that is incapable of giving birth to offspring that are living as they exit is the
|
[
"bear",
"human",
"beaver",
"salamander"
] |
Key fact:
mammals give birth to live young
|
D
| 3
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
tassdb ( tandem splice site database ) is a database of tandem splice sites of eight species see also alternative splicing references external links https : / / archive. today / 20070106023527 / http : / / helios. informatik. uni - freiburg. de / tassdb /.
|
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb includes also a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without missing the focus on the searched protein or gene. in phylomedb v4 all the information available for each tree
|
The sun setting occurs
|
[
"30 days in January",
"28 days in February",
"every other day in April",
"every third day in May"
] |
Key fact:
the sun setting occurs once per day
|
B
| 1
|
openbookqa
|
this dial shows the exact date. the numbering goes to 31, the maximum number of days in a month. in months that have fewer days ( 28, 29 or 30 ), the hand automatically moves forward to the first day of the following month. the months with 31 days are january, march, may, july, august, october and december ; the months with 30 days are april, june, september and november. february is the only month with fewer than 30 days : it has only 28 days ( 29 days in leap years ).
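The month-length rule described above, written as a small function (the leap-year test is the standard Gregorian rule, which the dial text only implies):

```python
# Days in a given month: 31 for seven months, 30 for four, and 28 or
# 29 for february depending on the gregorian leap-year rule.
def days_in_month(month, year):
    if month in (1, 3, 5, 7, 8, 10, 12):   # jan, mar, may, jul, aug, oct, dec
        return 31
    if month in (4, 6, 9, 11):             # apr, jun, sep, nov
        return 30
    # february: leap years are divisible by 4, except century years
    # not divisible by 400
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    return 29 if leap else 28

print(days_in_month(2, 2024))  # -> 29
print(days_in_month(2, 1900))  # -> 28
```

Python's standard library provides the same answer via `calendar.monthrange(year, month)[1]`; the hand-rolled version just makes the dial's rule explicit.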
|
a temporal database stores data relating to time instances. it offers temporal data types and stores information relating to past, present and future time. temporal databases can be uni - temporal, bi - temporal or tri - temporal. more specifically the temporal aspects usually include valid time, transaction time and / or decision time. valid time is the time period during or event time at which a fact is true in the real world. transaction time is the time at which a fact was recorded in the database. decision time is the time at which the decision was made about the fact. used to keep a history of decisions about valid times. types uni - temporal a uni - temporal database has one axis of time, either the validity range or the system time range. bi - temporal a bi - temporal database has two axes of time : valid time transaction time or decision time tri - temporal a tri - temporal database has three axes of time : valid time transaction time decision time this approach introduces additional complexities. temporal databases are in contrast to current databases ( not to be confused with currently available databases ), which store only facts which are believed to be true at the current time. features temporal databases support managing and accessing temporal data by providing one or more of the following features : a time period datatype, including the ability to represent time periods with no end ( infinity or forever ) the ability to define valid and transaction time period attributes and bitemporal relations system - maintained transaction time temporal primary keys, including
|
march 0 or 0 march is an alternative name for the last day of february ( february 28, or february 29 in leap years ). it is used most often in astronomy, software engineering, and doomsday algorithm calculations.
|
Xenons use current to produce light as well as
|
[
"rainbows",
"thermal exchange",
"darkness",
"heat sinks"
] |
Key fact:
some light bulbs convert electricity into light and heat energy
|
B
| 1
|
openbookqa
|
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
in nuclear data evaluation and validation, a library and a database serve different purposes, but both are essential for accurate predictions in theoretical nuclear reactor models. a nuclear data library is a collection of evaluated nuclear data files that contain information about various nuclear reactions, decay processes, and other relevant properties of atomic nuclei. these libraries are created through a rigorous evaluation process that combines experimental data, theoretical models, and statistical methods to provide the best possible representation of nuclear properties. some well - known nuclear data libraries include endf ( evaluated nuclear data file ), jeff ( joint evaluated fission and fusion ), and jendl ( japanese evaluated nuclear data library ). on the other hand, a nuclear database is a structured and organized collection of raw experimental and theoretical data related to nuclear reactions and properties. these databases store information from various sources, such as experimental measurements, theoretical calculations, and simulations. they serve as a primary source of information for nuclear data evaluators when creating nuclear data libraries. examples of nuclear databases include exfor ( experimental nuclear reaction data ), cinda ( computer index of nuclear data ), and ensdf ( evaluated nuclear structure data file ). the choice between a library and a database affects the accuracy of nuclear data predictions in a theoretical nuclear reactor model in several ways : 1. quality of data : nuclear data libraries contain evaluated data, which means they have undergone a thorough evaluation process to ensure their accuracy and reliability. in contrast, databases contain raw data that may not have been evaluated or validated.
|
Coal might start out as
|
[
"a mean fairy-godmother",
"pinecones",
"happiness",
"a troll"
] |
Key fact:
coal is a nonrenewable resource
|
B
| 1
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
computational lexicons and dictionaries. in encyclopaedia of language and linguistics ( 2nd ed. ), k. r. brown, ed.
|
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb includes also a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without missing the focus on the searched protein or gene. in phylomedb v4 all the information available for each tree
|
Which needs only sparse water?
|
[
"fish",
"frogs",
"whales",
"chuckwallas"
] |
Key fact:
some lizards live in desert habitats
|
D
| 3
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
|
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb includes also a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without missing the focus on the searched protein or gene. in phylomedb v4 all the information available for each tree
|
The mantle is a layer of the Earth; what is something other than a layer?
|
[
"crust",
"inner core",
"outer core",
"lava pit"
] |
Key fact:
the mantle is a layer of the Earth
|
D
| 3
|
openbookqa
|
metpetdb is a relational database and repository for global geochemical data on and images collected from metamorphic rocks from the earth's crust. metpetdb is designed and built by a global community of metamorphic petrologists in collaboration with computer scientists at rensselaer polytechnic institute as part of the national cyberinfrastructure initiative and supported by the national science foundation. metpetdb is unique in that it incorporates image data collected by a variety of techniques, e. g. photomicrographs, backscattered electron images ( sem ), and x - ray maps collected by wavelength dispersive spectroscopy or energy dispersive spectroscopy. purpose metpetdb was built for the purpose of archiving published data and for storing new data for ready access to researchers and students in the petrologic community. this database facilitates the gathering of information for researchers beginning new projects and permits browsing and searching for data relating to anywhere on the globe. metpetdb provides a platform for collaborative studies among researchers anywhere on the planet, serves as a portal for students beginning their studies of metamorphic geology, and acts as a repository of vast quantities of data being collected by researchers globally. design the basic structure of metpetdb is based on a geologic sample and derivative subsamples. geochemical data are linked to subsamples and the minerals within them, while image data can relate to samples or subsamples. metpetdb is designed to store the distinct spatial / textural context
|
a crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. they are characterized by symmetry, morphology, and directionally dependent physical properties. a crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. ( molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in x - ray, neutron, and electron diffraction based crystallography ). crystal structures of crystalline material are typically determined from x - ray or neutron single - crystal diffraction data and stored in crystal structure databases. they are routinely identified by comparing reflection intensities and lattice spacings from x - ray powder diffraction data with entries in powder - diffraction fingerprinting databases. crystal structures of nanometer sized crystalline samples can be determined via structure factor amplitude information from single - crystal electron diffraction data or structure factor amplitude and phase angle information from fourier transforms of hrtem images of crystallites. they are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice - fringe fingerprint plots with entries in a lattice - fringe fingerprinting database. crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. many provide structure visualization capabilities. they can be browser based or
|
bilbao crystallographic server is an open access website offering online crystallographic database and programs aimed at analyzing, calculating and visualizing problems of structural and mathematical crystallography, solid state physics and structural chemistry. initiated in 1997 by the materials laboratory of the department of condensed matter physics at the university of the basque country, bilbao, spain, the bilbao crystallographic server is developed and maintained by academics. information on contents and an overview of tools hosted focusing on crystallographic data and applications of the group theory in solid state physics, the server is built on a core of databases and contains different shells. space groups retrieval tools the set of databases includes data from international tables of crystallography, vol. a : space - group symmetry, and the data of maximal subgroups of space groups as listed in international tables of crystallography, vol. a1 : symmetry relations between space groups. a k - vector database with brillouin zone figures and classification tables of the k - vectors for space groups is also available via the kvec tool. magnetic space groups in 2011, the magnetic space groups data compiled from h. t. stokes & b. j. campbell's and d. litvin's's works general positions / symmetry operations and wyckoff positions for different settings, along with systematic absence rules have also been incorporated into the server and a new shell has been dedicated to the related tools ( mgenpos, mwyckpos, magnext ). group -
|
Which is most likely a cause of camouflage's effectiveness?
|
[
"a predator's eye sight",
"a predator's sense of hearing",
"a predator's sense of style",
"a predator's poor smelling"
] |
Key fact:
camouflage is a kind of protection against predators
|
A
| 0
|
openbookqa
|
predation is a biological interaction in which one organism, the predator, kills and eats another organism, its prey. it is one of a family of common feeding behaviours that includes parasitism and micropredation ( which usually do not kill the host ) and parasitoidism ( which always does, eventually ). it is distinct from scavenging on dead prey, though many predators also scavenge ; it overlaps with herbivory, as seed predators and destructive frugivores are predators. predation behavior varies significantly depending on the organism. many predators, especially carnivores, have evolved distinct hunting strategies. pursuit predation involves the active search for and pursuit of prey, whilst ambush predators instead wait for prey to present an opportunity for capture, and often use stealth or aggressive mimicry. other predators are opportunistic or omnivorous and only practice predation occasionally. most obligate carnivores are specialized for hunting. they may have acute senses such as vision, hearing, or smell for prey detection. many predatory animals have sharp claws or jaws to grip, kill, and cut up their prey. physical strength is usually necessary for large carnivores such as big cats to kill larger prey. other adaptations include stealth, endurance, intelligence, social behaviour, and aggressive mimicry that improve hunting efficiency. predation has a powerful selective effect on prey, and the prey develops anti - predator adaptations such as warning colouration, alarm calls and other
|
perception ( from latin perceptio'gathering, receiving') is the organization, identification, and interpretation of sensory information in order to represent and understand the presented information or environment. all perception involves signals that go through the nervous system, which in turn result from physical or chemical stimulation of the sensory system. vision involves light striking the retina of the eye ; smell is mediated by odor molecules ; and hearing involves pressure waves. perception is not only the passive receipt of these signals, but it is also shaped by the recipient's learning, memory, expectation, and attention. sensory input is a process that transforms this low - level information to higher - level information ( e. g., extracts shapes for object recognition ). the following process connects a person's concepts and expectations ( or knowledge ) with restorative and selective mechanisms, such as attention, that influence perception. perception depends on complex functions of the nervous system, but subjectively seems mostly effortless because this processing happens outside conscious awareness. since the rise of experimental psychology in the 19th century, psychology's understanding of perception has progressed by combining a variety of techniques. psychophysics quantitatively describes the relationships between the physical qualities of the sensory input and perception. sensory neuroscience studies the neural mechanisms underlying perception. perceptual systems can also be studied computationally, in terms of the information they process. perceptual issues in philosophy include the extent to which sensory qualities such as sound, smell or color exist in objective reality
|
the bitterdb is a database of compounds that were reported to taste bitter to humans. the aim of the bitterdb database is to gather information about bitter - tasting natural and synthetic compounds, and their cognate bitter taste receptors ( t2rs or tas2rs ). summary the bitterdb includes over 670 compounds that were reported to taste bitter to humans. the compounds can be searched by name, chemical structure, similarity to other bitter compounds, association with a particular human bitter taste receptor, and by other properties as well. the database also contains information on mutations in bitter taste receptors that were shown to influence receptor activation by bitter compounds. database overview bitter compounds bitterdb currently contains more than 670 compounds that were cited in the literature as bitter. for each compound, the database offers information regarding its molecular properties, references for the compounds bitterness, including additional information about the bitterness category of the compound ( e. g. a bitter - sweet or slightly bitter annotation ), different compound identifiers ( smiles, cas registry number, iupac systematic name ), an indication whether the compound is derived from a natural source or is synthetic, a link to the compounds pubchem entry and different file formats for downloading ( sdf, image, smiles ). over 200 bitter compounds have been experimentally linked to their corresponding human bitter taste receptors. for those compounds, bitterdb provides additional information, including links to the publications indicating these ligandreceptor interactions, the effective concentration for receptor
|
if food has no immediate use for energy then it will
|
[
"be discarded immediately",
"kept for later",
"left to rot",
"be thrown up"
] |
Key fact:
if food is not immediately used by the body for energy then that food will be stored for future use
|
B
| 1
|
openbookqa
|
a database dump contains a record of the table structure and / or the data from a database and is usually in the form of a list of sql statements ( " sql dump " ). a database dump is most often used for backing up a database so that its contents can be restored in the event of data loss. corrupted databases can often be recovered by analysis of the dump. database dumps are often published by free content projects, to facilitate reuse, forking, offline use, and long - term digital preservation. dumps can be transported into environments with internet blackouts or otherwise restricted internet access, as well as facilitate local searching of the database using sophisticated tools such as grep. see also import and export of data core dump databases database management system sqlyog - mysql gui tool to generate database dump data portability external links mysqldump a database backup program postgresql dump backup methods, for postgresql databases.
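The dump-and-restore cycle described above can be shown with Python's standard-library `sqlite3`, whose `iterdump` method emits exactly the kind of "SQL dump" the text describes (a list of CREATE/INSERT statements). This is a sqlite-specific stand-in for tools like `mysqldump`.

```python
# Create a tiny database, produce an SQL dump of it, then restore the
# dump into a fresh database -- the backup/restore cycle from the text.
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t (id INTEGER, name TEXT)")
src.execute("INSERT INTO t VALUES (1, 'a')")
src.commit()

# The "SQL dump": a sequence of SQL statements recreating the database.
dump = "\n".join(src.iterdump())

dst = sqlite3.connect(":memory:")
dst.executescript(dump)  # restore contents from the dump
print(dst.execute("SELECT * FROM t").fetchall())  # -> [(1, 'a')]
```

Because the dump is plain SQL text, it can also be searched with tools like `grep`, as the text notes.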
|
in database technologies, a rollback is an operation which returns the database to some previous state. rollbacks are important for database integrity, because they mean that the database can be restored to a clean copy even after erroneous operations are performed. they are crucial for recovering from database server crashes ; by rolling back any transaction which was active at the time of the crash, the database is restored to a consistent state. the rollback feature is usually implemented with a transaction log, but can also be implemented via multiversion concurrency control. cascading rollback a cascading rollback occurs in database systems when a transaction ( t1 ) causes a failure and a rollback must be performed. other transactions dependent on t1's actions must also be rolled back due to t1's failure, thus causing a cascading effect. that is, one transaction's failure causes many to fail. practical database recovery techniques guarantee cascadeless rollback, therefore a cascading rollback is not a desirable result. sql sql refers to structured query language, a kind of language used to access, update and manipulate databases. in sql, rollback is a command that causes all data changes since the last start transaction or begin to be discarded by the relational database management systems ( rdbms ), so that the state of the data is " rolled back " to the way it was before those changes were made. a rollback statement
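The ROLLBACK behaviour described above (all changes since the transaction began are discarded) can be demonstrated with Python's standard-library `sqlite3`; table and column names are illustrative.

```python
# Demonstrate rollback: an uncommitted UPDATE is discarded and the
# table returns to its last committed state.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
con.execute("INSERT INTO accounts VALUES ('alice', 100)")
con.commit()  # committed state we can roll back to

# This UPDATE opens a new transaction that is never committed...
con.execute("UPDATE accounts SET balance = 0 WHERE name = 'alice'")
con.rollback()  # ...so rolling back discards the change entirely

balance = con.execute(
    "SELECT balance FROM accounts WHERE name = 'alice'").fetchone()[0]
print(balance)  # -> 100
```

Had `con.commit()` been called instead of `con.rollback()`, the balance would have stayed 0; rollback restores the pre-transaction state.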
|
a flat - file database is a database stored in a file called a flat file. records follow a uniform format, and there are no structures for indexing or recognizing relationships between records. the file is simple. a flat file can be a plain text file ( e. g. csv, txt or tsv ), or a binary file. relationships can be inferred from the data in the database, but the database format itself does not make those relationships explicit. the term has generally implied a small database, but very large databases can also be flat. overview plain text files usually contain one record per line. examples of flat files include / etc / passwd and / etc / group on unix - like operating systems. another example of a flat file is a name - and - address list with the fields name, address and phone number. flat files are typically either delimiter - separated or fixed - width. delimiter - separated values in delimiter - separated values files, the fields are separated by a character or string called the delimiter. common variants are comma - separated values ( csv ) where the delimiter is a comma, tab - separated values ( tsv ) where the delimiter is the tab character ), space - separated values and vertical - bar - separated values ( delimiter is | ). if the delimiter is allowed inside a field, there needs to be a way to distinguish delimiters characters or
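The delimiter-separated variants above (CSV, TSV, and quoting so a delimiter may appear inside a field) can be shown with Python's standard-library `csv` module; the name-and-address fields mirror the example in the text, and the values themselves are made up.

```python
# Parse delimiter-separated flat files: one record per line, fields
# split on a delimiter, with quoting to protect delimiters in a field.
import csv, io

flat = 'name,address,phone\n"Doe, Jane","1 Main St",555-0100\n'
rows = list(csv.reader(io.StringIO(flat)))  # comma-separated (CSV)
print(rows[1][0])  # -> Doe, Jane  (the quoted field keeps its comma)

tsv = "name\taddress\tphone\nJane\t1 Main St\t555-0100\n"
trows = list(csv.reader(io.StringIO(tsv), delimiter="\t"))  # TSV
```

Any relationship between records (say, two rows sharing an address) must be inferred by the program; nothing in the file format itself expresses it, which is exactly the limitation the text describes.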
|
The usual kind of weather in a location is called what?
|
[
"warmth",
"fog",
"zone conditions",
"visibility"
] |
Key fact:
climate is the usual kind of weather in a location
|
C
| 2
|
openbookqa
|
integrated surface database ( isd ) is a global database compiled by the national oceanic and atmospheric administration ( noaa ) and the national centers for environmental information ( ncei ) comprising hourly and synoptic surface observations compiled globally from ~ 35, 500 weather stations ; it is updated, automatically, hourly. the data largely date back to paper records which were keyed in by hand in the '60s and '70s ( and in some cases, weather observations from over one hundred years ago ). it was developed by the joint federal climate complex project in asheville, north carolina.
|
over the last two centuries many environmental chemical observations have been made from a variety of ground - based, airborne, and orbital platforms and deposited in databases. many of these databases are publicly available. all of the instruments mentioned in this article give online public access to their data. these observations are critical in developing our understanding of the earth's atmosphere and issues such as climate change, ozone depletion and air quality. some of the external links provide repositories of many of these datasets in one place. for example, the cambridge atmospheric chemical database, is a large database in a uniform ascii format. each observation is augmented with the meteorological conditions such as the temperature, potential temperature, geopotential height, and equivalent pv latitude. ground - based and balloon observations ndsc observations. the network for the detection for stratospheric change ( ndsc ) is a set of high - quality remote - sounding research stations for observing and understanding the physical and chemical state of the stratosphere. ozone and key ozone - related chemical compounds and parameters are targeted for measurement. the ndsc is a major component of the international upper atmosphere research effort and has been endorsed by national and international scientific agencies, including the international ozone commission, the united nations environment programme ( unep ), and the world meteorological organization ( wmo ). the primary instruments and measurements are : ozone lidar ( vertical profiles of ozone from the tropopause to at least 40 km altitude
|
then the bulk of effort concentrates on writing the proper mediator code that will transform predicates on weather into a query over the weather website. this effort can become complex if some other source also relates to weather, because the designer may need to write code to properly combine the results from the two sources. on the other hand, in lav, the source database is modeled as a set of views over g { \ displaystyle g }.
|
Which evaporates from its container when used?
|
[
"spray deodorant",
"pretzels",
"water",
"dog food"
] |
Key fact:
when a gas in an open container evaporates, that gas spreads out into the air
|
A
| 0
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
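Python's standard-library `shelve` is not an OODBMS, but it gives a rough flavour of the idea above: program objects are stored directly under a key, without being mapped to tables, so the program and the store share one model of representation. The `Part` class and key names are invented for illustration.

```python
# Store and retrieve program objects directly, without flattening
# them into tables -- a (very) rough sketch of the object-database idea.
import os, shelve, tempfile

class Part:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)  # objects nest, like CAD assemblies

path = os.path.join(tempfile.mkdtemp(), "store")
with shelve.open(path) as db:
    db["wheel"] = Part("wheel", [Part("bolt"), Part("bolt")])

with shelve.open(path) as db:
    wheel = db["wheel"]  # comes back as a live Part object
print(wheel.name, len(wheel.children))  # -> wheel 2
```

In a relational design the same data would be split across a parts table and a parent/child link table and reassembled by joins; here the nested object round-trips as-is.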
|
genedb was a genome database for eukaryotic and prokaryotic pathogens. external links : http://www.genedb.org
|
which of these will attract a magnet in a student's pocket?
|
[
"an old stapler pin",
"a piece of gum",
"all of these",
"a piece of chicken"
] |
Key fact:
if a magnet is attracted to a metal then that magnet will stick to that metal
|
A
| 0
|
openbookqa
|
q is a programming language for array processing, developed by arthur whitney. it is proprietary software, commercialized by kx systems. q serves as the query language for kdb +, a disk based and in - memory, column - based database. kdb + is based on the language k, a terse variant of the language apl. q is a thin wrapper around k, providing a more readable, english - like interface. one of the use cases is financial time series analysis, as one could do inexact time matches. an example is to match the a bid and the ask before that. both timestamps slightly differ and are matched anyway. overview the fundamental building blocks of q are atoms, lists, and functions. atoms are scalars and include the data types numeric, character, date, and time. lists are ordered collections of atoms ( or other lists ) upon which the higher level data structures dictionaries and tables are internally constructed. a dictionary is a map of a list of keys to a list of values. a table is a transposed dictionary of symbol keys and equal length lists ( columns ) as values. a keyed table, analogous to a table with a primary key placed on it, is a dictionary where the keys and values are arranged as two tables. the following code demonstrates the relationships of the data structures. expressions to evaluate appear prefixed with the q ) prompt, with the output of the evaluation shown beneath : these entities are manipulated
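The chunk above ends just before its q listing, so as a stand-in, the dictionary/table relationship it describes (a table as a "transposed dictionary" of symbol keys mapping to equal-length column lists) can be mimicked in Python; column names and values are invented.

```python
# Mimic q's column-oriented table: a dictionary mapping column names
# (q symbols) to equal-length lists -- a "transposed dictionary".
table = {"sym":   ["aapl", "msft", "aapl"],
         "price": [101.0, 250.5, 102.3]}

def row(t, i):
    # reading row i means indexing every column list at position i
    return {col: vals[i] for col, vals in t.items()}

print(row(table, 1))  # -> {'sym': 'msft', 'price': 250.5}
```

Storing columns as whole lists is what makes column-based engines like kdb+ fast at scans and time-series operations over a single column.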
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
Which of the following would best describe why a lizard would live in a desert?
|
[
"it can eat bugs and withstand very cold weather",
"it can eat fish and withstand very cold weather",
"it can eat bugs and withstand very hot weather",
"it can eat fish and withstand very hot weather"
] |
Key fact:
some lizards live in desert habitats
|
C
| 2
|
openbookqa
|
eating ( also known as consuming ) is the ingestion of food. in biology, this is typically done to provide a heterotrophic organism with energy and nutrients and to allow for growth. animals and other heterotrophs must eat in order to survive carnivores eat other animals, herbivores eat plants, omnivores consume a mixture of both plant and animal matter, and detritivores eat detritus. fungi digest organic matter outside their bodies as opposed to animals that digest their food inside their bodies. for humans, eating is more complex, but is typically an activity of daily living. physicians and dieticians consider a healthful diet essential for maintaining peak physical condition. some individuals may limit their amount of nutritional intake. this may be a result of a lifestyle choice : as part of a diet or as religious fasting. limited consumption may be due to hunger or famine. overconsumption of calories may lead to obesity and the reasons behind it are myriad, however, its prevalence has led some to declare an " obesity epidemic ". eating practices among humans many homes have a large kitchen area devoted to preparation of meals and food, and may have a dining room, dining hall, or another designated area for eating. most societies also have restaurants, food courts, and food vendors so that people may eat when away from home, when lacking time to prepare food, or as a social occasion. at their highest level of sophistication,
|
a poikilotherm ( ) is an animal ( greek poikilos'various ','spotted ', and therme'heat') whose internal temperature varies considerably. poikilotherms have to survive and adapt to environmental stress. one of the most important stressors is outer environment temperature change, which can lead to alterations in membrane lipid order and can cause protein unfolding and denaturation at elevated temperatures. poikilotherm is the opposite of homeotherm an animal which maintains thermal homeostasis. in principle, the term could be applied to any organism, but it is generally only applied to vertebrate animals. usually the fluctuations are a consequence of variation in the ambient environmental temperature. many terrestrial ectotherms are poikilothermic. however some ectotherms seek constant - temperature environments to the point that they are able to maintain a constant internal temperature, and are considered actual or practical homeotherms. it is this distinction that often makes the term poikilotherm more useful than the vernacular " cold - blooded ", which is sometimes used to refer to ectotherms more generally. poikilothermic animals include types of vertebrate animals, specifically some fish, amphibians, and reptiles, as well as many invertebrate animals. the naked mole - rat and sloths are some of the rare mammals which are poikilothermic. etymology the term derives from greek
|
food is chemical energy stored in organic molecules.
|
You can let someone know you are at the door thanks to a
|
[
"lemonade",
"cars",
"grass",
"power plant"
] |
Key fact:
a doorbell converts electrical energy into sound
|
D
| 3
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
the plant proteome database is a national science foundation - funded project to determine the biological function of each protein in plants. it includes data for two plants that are widely studied in molecular biology, arabidopsis thaliana and maize ( zea mays ). initially the project was limited to plant plastids, under the name of the plastid pdb, but was expanded and renamed plant pdb in november 2007. see also proteome references external links plant proteome database home page
|
a vector database, vector store or vector search engine is a database that uses the vector space model to store vectors ( fixed - length lists of numbers ) along with other data items. vector databases typically implement one or more approximate nearest neighbor algorithms, so that one can search the database with a query vector to retrieve the closest matching database records. vectors are mathematical representations of data in a high - dimensional space. in this space, each dimension corresponds to a feature of the data, with the number of dimensions ranging from a few hundred to tens of thousands, depending on the complexity of the data being represented. a vector's position in this space represents its characteristics. words, phrases, or entire documents, as well as images, audio, and other types of data, can all be vectorized. these feature vectors may be computed from the raw data using machine learning methods such as feature extraction algorithms, word embeddings or deep learning networks. the goal is that semantically similar data items receive feature vectors close to each other. vector databases can be used for similarity search, semantic search, multi - modal search, recommendations engines, large language models ( llms ), object detection, etc. vector databases are also often used to implement retrieval - augmented generation ( rag ), a method to improve domain - specific responses of large language models. the retrieval component of a rag can be any search system, but is most often implemented as a vector database. text documents describing the domain of interest are collected,
|
Which of these is usually green in color?
|
[
"The Alps",
"Antarctica",
"Redwood National Park",
"The Pacific"
] |
Key fact:
a forest environment is often green in color
|
C
| 2
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
metpetdb is a relational database and repository for global geochemical data on and images collected from metamorphic rocks from the earth's crust. metpetdb is designed and built by a global community of metamorphic petrologists in collaboration with computer scientists at rensselaer polytechnic institute as part of the national cyberinfrastructure initiative and supported by the national science foundation. metpetdb is unique in that it incorporates image data collected by a variety of techniques, e. g. photomicrographs, backscattered electron images ( sem ), and x - ray maps collected by wavelength dispersive spectroscopy or energy dispersive spectroscopy. purpose metpetdb was built for the purpose of archiving published data and for storing new data for ready access to researchers and students in the petrologic community. this database facilitates the gathering of information for researchers beginning new projects and permits browsing and searching for data relating to anywhere on the globe. metpetdb provides a platform for collaborative studies among researchers anywhere on the planet, serves as a portal for students beginning their studies of metamorphic geology, and acts as a repository of vast quantities of data being collected by researchers globally. design the basic structure of metpetdb is based on a geologic sample and derivative subsamples. geochemical data are linked to subsamples and the minerals within them, while image data can relate to samples or subsamples. metpetdb is designed to store the distinct spatial / textural context
|
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
|
A small dish that is in space will do this as it nears an enormous mass.
|
[
"be blown up",
"be yanked in",
"be pushed away",
"be evaporated"
] |
Key fact:
as distance from an object decreases, the pull of gravity on that object increases
|
B
| 1
|
openbookqa
|
a database dump contains a record of the table structure and / or the data from a database and is usually in the form of a list of sql statements ( " sql dump " ). a database dump is most often used for backing up a database so that its contents can be restored in the event of data loss. corrupted databases can often be recovered by analysis of the dump. database dumps are often published by free content projects, to facilitate reuse, forking, offline use, and long - term digital preservation. dumps can be transported into environments with internet blackouts or otherwise restricted internet access, as well as facilitate local searching of the database using sophisticated tools such as grep. see also import and export of data core dump databases database management system sqlyog - mysql gui tool to generate database dump data portability external links mysqldump a database backup program postgresql dump backup methods, for postgresql databases.
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
|
structured query language ( sql ) ( pronounced s - q - l ; or alternatively as " sequel " ) is a domain - specific language used to manage data, especially in a relational database management system ( rdbms ). it is particularly useful in handling structured data, i. e., data incorporating relations among entities and variables. introduced in the 1970s, sql offered two main advantages over older readwrite apis such as isam or vsam. firstly, it introduced the concept of accessing many records with one single command. secondly, it eliminates the need to specify how to reach a record, i. e., with or without an index. originally based upon relational algebra and tuple relational calculus, sql consists of many types of statements, which may be informally classed as sublanguages, commonly : data query language ( dql ), data definition language ( ddl ), data control language ( dcl ), and data manipulation language ( dml ). the scope of sql includes data query, data manipulation ( insert, update, and delete ), data definition ( schema creation and modification ), and data access control. although sql is essentially a declarative language ( 4gl ), it also includes procedural elements. sql was one of the first commercial languages to use edgar f. codd's relational model. the model was described in his influential 1970 paper, " a relational model of data for large shared data banks ". despite not entirely ad
|
A boy wants to collect clams for supper and must therefore spend time
|
[
"in lake shallows",
"in desert sands",
"in rocky hills",
"in sea depths"
] |
Key fact:
clams live at the bottom of the ocean
|
D
| 3
|
openbookqa
|
metpetdb is a relational database and repository for global geochemical data on and images collected from metamorphic rocks from the earth's crust. metpetdb is designed and built by a global community of metamorphic petrologists in collaboration with computer scientists at rensselaer polytechnic institute as part of the national cyberinfrastructure initiative and supported by the national science foundation. metpetdb is unique in that it incorporates image data collected by a variety of techniques, e. g. photomicrographs, backscattered electron images ( sem ), and x - ray maps collected by wavelength dispersive spectroscopy or energy dispersive spectroscopy. purpose metpetdb was built for the purpose of archiving published data and for storing new data for ready access to researchers and students in the petrologic community. this database facilitates the gathering of information for researchers beginning new projects and permits browsing and searching for data relating to anywhere on the globe. metpetdb provides a platform for collaborative studies among researchers anywhere on the planet, serves as a portal for students beginning their studies of metamorphic geology, and acts as a repository of vast quantities of data being collected by researchers globally. design the basic structure of metpetdb is based on a geologic sample and derivative subsamples. geochemical data are linked to subsamples and the minerals within them, while image data can relate to samples or subsamples. metpetdb is designed to store the distinct spatial / textural context
|
seddb is an online database for sediment geochemistry. seddb is based on a relational database that contains the full range of analytical values for sediment samples, primarily from marine sediment cores, including major and trace element concentrations, radiogenic and stable isotope ratios, and data for all types of material such as organic and inorganic components, leachates, and size fractions. seddb also archives a vast array of metadata relating to the individual sample. examples of seddb metadata are : sample latitude and longitude ; elevation below sea surface ; material analyzed ; analytical methodology ; analytical precision and reference standard measurements. as of april, 2013 seddb contains nearly 750, 000 individual analytical data points of 104, 000 samples. seddb contents have been migrated to the earthchem portal. purpose seddb was developed to complement current geological data systems ( petdb, earthchem, navdat and georoc ) with an integrated and easily accessible compilation of geochemical data of marine and continental sediments to be utilized for sedimentological, geochemical, petrological, oceanographic, and paleoclimate research, as well as for educational purposes. funding and management seddb was developed, operated and maintained by a joint team of disciplinary scientists, data scientists, data managers and information technology developers at the lamontdoherty earth observatory as part of the integrated earth data applications ( ieda ) research group funded by the us national science foundation. seddb was built collaborative
|
an array database management system or array dbms provides database services specifically for arrays ( also called raster data ), that is : homogeneous collections of data items ( often called pixels, voxels, etc. ), sitting on a regular grid of one, two, or more dimensions. often arrays are used to represent sensor, simulation, image, or statistics data. such arrays tend to be big data, with single objects frequently ranging into terabyte and soon petabyte sizes ; for example, today's earth and space observation archives typically grow by terabytes a day. array databases aim at offering flexible, scalable storage and retrieval on this information category. overview in the same style as standard database systems do on sets, array dbmss offer scalable, flexible storage and flexible retrieval / manipulation on arrays of ( conceptually ) unlimited size. as in practice arrays never appear standalone, such an array model normally is embedded into some overall data model, such as the relational model. some systems implement arrays as an analogy to tables, some introduce arrays as an additional attribute type. management of arrays requires novel techniques, particularly due to the fact that traditional database tuples and objects tend to fit well into a single database page a unit of disk access on server, typically 4 kb while array objects easily can span several media. the prime task of the array storage manager is to give fast access to large arrays and sub - arrays. to this end, arrays get partitioned, during insertion, into
|
What's used by migrating animals to find locations?
|
[
"global positioning system satellites",
"a sense of smell",
"our planet's magnetic patterns",
"the stars in the night sky"
] |
Key fact:
Earth's magnetic patterns are used for finding locations by animals that migrate
|
C
| 2
|
openbookqa
|
the satellites that most concern us are those with a low - earth, polar orbit since geostationary satellites view the same point throughout their lifetime. the diagram shows measurements from amsu - b instruments mounted on three satellites over a period of 12 hours. this illustrates both the orbit path and the scan pattern which runs crosswise. since the orbit of a satellite is deterministic, barring orbit maneuvers, we can predict the location of the satellite at a given time and, by extension, the location of the measurement pixels.
|
earth observation satellites are designed to monitor and survey the earth, called remote sensing. most earth observation satellites are placed in low earth orbit for a high data resolution, though some are placed in a geostationary orbit for an uninterrupted coverage. some satellites are placed in a sun - synchronous orbit to have consistent lighting and obtain a total view of the earth. depending on the satellites'functions, they might have a normal camera, radar, lidar, photometer, or atmospheric instruments.
|
weather stations collect data on land and sea. weather balloons, satellites, and radar collect data in the atmosphere.
|
A penguin, while a bird, would avoid living in
|
[
"a cold pole",
"a zoo",
"a frozen habitat",
"a native forest"
] |
Key fact:
some birds live in forests
|
D
| 3
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
|
A vehicle stops when brakes are pressed because
|
[
"the tires are being halted by pressure",
"the streets are bumpy",
"the roads have friction",
"the tires are unable to create friction"
] |
Key fact:
friction is used for stopping a vehicle by brakes
|
A
| 0
|
openbookqa
|
two horrific contraptions on frictionless wheels are compressing a spring by compared to its uncompressed ( equilibrium ) length. each of the vehicles is stationary and they are connected by a string. the string is cut! find the speeds of the vehicles once they lose contact with the spring.
|
the circle of forces, traction circle, friction circle, or friction ellipse is a useful way to think about the dynamic interaction between a vehicle's tire and the road surface. the diagram below shows the tire from above, so that the road surface lies in the xy - plane. the vehicle to which the tire is attached is moving in the positive y direction. in this example, the vehicle would be cornering to the right ( i. e. the positive x direction points to the center of the corner ). note that the plane of rotation of the tire is at an angle to the actual direction that the tire is moving ( the positive y direction ). put differently, rather than being allowed to simply " roll " in the direction that it is " pointing " ( in this case, rightwards from the positive y direction ), the tire instead must " slip " in a different direction from that which it is pointing in order to maintain its " forward " motion in the positive y direction. this difference between the direction the tire " points " ( its plane of rotation ) and the tire's actual direction of travel is the slip angle. a tire can generate horizontal force where it meets the road surface by the mechanism of slip. that force is represented in the diagram by the vector f. note that in this example, f is perpendicular to the plane of the tire. that is because the tire is rolling freely, with no torque applied to it by the vehicle's brakes
|
friction is caused by bodies sliding over rough surfaces.
|
Butterflies will oftentimes have coloring that looks like eyes on their wings for
|
[
"to look pretty",
"fun",
"to see flowers",
"protection"
] |
Key fact:
mimicry is used for avoiding predators by animals by camouflaging as a dangerous animal
|
D
| 3
|
openbookqa
|
in a database, a view is the result set of a stored query that presents a limited perspective of the database to a user. this pre - established query command is kept in the data dictionary. unlike ordinary base tables in a relational database, a view does not form part of the physical schema : as a result set, it is a virtual table computed or collated dynamically from data in the database when access to that view is requested. changes applied to the data in a relevant underlying table are reflected in the data shown in subsequent invocations of the view. views can provide advantages over tables : views can represent a subset of the data contained in a table. consequently, a view can limit the degree of exposure of the underlying tables to the outer world : a given user may have permission to query the view, while denied access to the rest of the base table. views can join and simplify multiple tables into a single virtual table. views can act as aggregated tables, where the database engine aggregates data ( sum, average, etc. ) and presents the calculated results as part of the data. views can hide the complexity of data. for example, a view could appear as sales2020 or sales2021, transparently partitioning the actual underlying table. views take very little space to store ; the database contains only the definition of a view, not a copy of all the data that it presents. views structure data in a way that classes of users find natural and intuitive.
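The view behavior described in this chunk can be sketched with Python's built-in sqlite3 module; the table name, column names, and data below are illustrative, not from the source:

```python
import sqlite3

# In-memory database; table and column names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, year INTEGER, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                 [("east", 2020, 100.0), ("west", 2020, 150.0),
                  ("east", 2021, 120.0)])

# A view stores only its query definition, not a copy of the rows.
conn.execute("CREATE VIEW sales2020 AS "
             "SELECT region, amount FROM sales WHERE year = 2020")

# Changes to the base table are reflected on the next invocation of the view.
conn.execute("INSERT INTO sales VALUES ('north', 2020, 80.0)")
rows = conn.execute("SELECT region, amount FROM sales2020 ORDER BY region").fetchall()
print(rows)
```

Note that the newly inserted `north` row appears in the view result even though it was added after the view was defined, matching the "virtual table computed dynamically" behavior described above.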
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
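A minimal sketch of the graph model this chunk describes — nodes, directed labelled edges carrying properties, and direct traversal of stored relationships; the class and the sample data are hypothetical, not any real graph database's API:

```python
# Nodes and edges kept in plain dictionaries; traversal follows stored
# relationships directly rather than computing joins.
class Graph:
    def __init__(self):
        self.nodes = {}        # node_id -> property dict
        self.out_edges = {}    # node_id -> list of (label, target, properties)

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props
        self.out_edges.setdefault(node_id, [])

    def add_edge(self, src, label, dst, **props):
        # Relationships are first-class: labelled, directed, with properties.
        self.out_edges[src].append((label, dst, props))

    def neighbors(self, node_id, label):
        # One operation retrieves linked data by walking stored edges.
        return [dst for (lbl, dst, _) in self.out_edges[node_id] if lbl == label]

g = Graph()
g.add_node("alice", kind="person")
g.add_node("bob", kind="person")
g.add_edge("alice", "KNOWS", "bob", since=2019)
result = g.neighbors("alice", "KNOWS")
print(result)
```

The point of the sketch is that `neighbors` never scans a global table: each node carries its own edge list, which is what makes relationship queries fast in the model described above.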
|
What is likely true?
|
[
"cacti will be in snowy regions",
"cacti will be in rocky regions",
"cacti will be in sandy regions",
"cacti will be in watery regions"
] |
Key fact:
a desert environment is dry
|
C
| 2
|
openbookqa
|
steppe ecosystems are unique biomes characterized by vast, treeless grasslands with a semi - arid to arid climate. they are typically found in temperate regions, such as the eurasian steppe, the great plains of north america, and the pampas of south america. these ecosystems differ from other biomes in several ways, including climate, vegetation, and species assemblages. climate : steppe ecosystems have a semi - arid to arid climate, with low annual precipitation and high evaporation rates. this results in a water - limited environment, which influences the types of vegetation and animal species that can survive in these conditions. the temperature in steppe ecosystems can also vary greatly, with hot summers and cold winters. vegetation : due to the limited water availability, steppe ecosystems are dominated by grasses and herbaceous plants, with few trees or shrubs. these plants have adapted to the harsh conditions by developing extensive root systems to access water deep in the soil, as well as mechanisms to conserve water, such as narrow leaves and specialized photosynthetic pathways. species assemblages : the unique conditions of steppe ecosystems have led to the evolution of distinct species assemblages. many animals found in these ecosystems are adapted to the open grasslands and have developed specific traits to survive in the harsh environment. for example, large grazing herbivores like bison, antelope, and saiga have evolved to efficiently digest the tough grasses that dominate the landscape. additionally, many steppe predators, such as wolves and foxes, have
|
other dry climates get a little more precipitation. they are called steppes. these regions have short grasses and low bushes ( figure below ). steppes occur at higher latitudes than deserts. they are dry because they are in continental interiors or rain shadows.
|
in warmer regions, plants and bacteria grow faster. plants and animals weather material and produce soils. in tropical regions, where temperature and precipitation are consistently high, thick soils form. arid regions have thin soils.
|
How long would it take for you to notice the sun's movement?
|
[
"half an hour",
"15 seconds",
"half a minute",
"one nanosecond"
] |
Key fact:
the Earth rotating on its axis causes the sun to appear to move across the sky during the day
|
A
| 0
|
openbookqa
|
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
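Option (2) above — removing the entries that hold only null values — can be sketched in a few lines of Python; the parameter names and measurements are made up for illustration:

```python
# A sparse measurement table: None marks "no measurement taken",
# while 0.0 is a genuine measured value and must be kept.
dense = {"temp_10C": 4.2, "temp_20C": None, "temp_30C": 0.0, "temp_40C": None}

# Store only the non-null entries as parameter -> value pairs.
sparse = {k: v for k, v in dense.items() if v is not None}

# Fraction of entries squeezed out, matching the 40-50% sparsity
# mentioned above for typical statistical databases.
sparsity = 1 - len(sparse) / len(dense)
print(sparse, sparsity)
```

The distinction between null (absent) and zero (measured) matters here: dropping zeros along with nulls would silently discard real data.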
|
a temporal database stores data relating to time instances. it offers temporal data types and stores information relating to past, present and future time. temporal databases can be uni - temporal, bi - temporal or tri - temporal. more specifically the temporal aspects usually include valid time, transaction time and / or decision time. valid time is the time period during or event time at which a fact is true in the real world. transaction time is the time at which a fact was recorded in the database. decision time is the time at which the decision was made about the fact. used to keep a history of decisions about valid times. types uni - temporal a uni - temporal database has one axis of time, either the validity range or the system time range. bi - temporal a bi - temporal database has two axes of time : valid time transaction time or decision time tri - temporal a tri - temporal database has three axes of time : valid time transaction time decision time this approach introduces additional complexities. temporal databases are in contrast to current databases ( not to be confused with currently available databases ), which store only facts which are believed to be true at the current time. features temporal databases support managing and accessing temporal data by providing one or more of the following features : a time period datatype, including the ability to represent time periods with no end ( infinity or forever ) the ability to define valid and transaction time period attributes and bitemporal relations system - maintained transaction time temporal primary keys, including
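The valid-time axis described above can be sketched as records carrying an explicit validity period, with an open-ended ("forever") upper bound; the facts and field layout are illustrative assumptions:

```python
from datetime import date

# Stand-in for an open-ended ("forever") valid-time period.
FOREVER = date.max

# Each fact carries a valid-time period (true in the real world) and a
# transaction time (when it was recorded in the database).
facts = [
    # (fact, valid_from, valid_to, recorded_on)
    ("alice lives in Paris", date(2018, 1, 1), date(2021, 6, 30), date(2018, 1, 5)),
    ("alice lives in Lyon",  date(2021, 7, 1), FOREVER,           date(2021, 7, 2)),
]

def valid_at(facts, when):
    """Return the facts whose valid-time period contains `when`."""
    return [f for (f, start, end, _) in facts if start <= when <= end]

result = valid_at(facts, date(2020, 3, 1))
print(result)
```

A current database would keep only the latest fact; the temporal version can still answer "where did alice live in March 2020?" because superseded facts retain their validity ranges.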
|
q is a programming language for array processing, developed by arthur whitney. it is proprietary software, commercialized by kx systems. q serves as the query language for kdb +, a disk based and in - memory, column - based database. kdb + is based on the language k, a terse variant of the language apl. q is a thin wrapper around k, providing a more readable, english - like interface. one of the use cases is financial time series analysis, as one could do inexact time matches. an example is to match a bid with the ask before it. both timestamps slightly differ and are matched anyway. overview the fundamental building blocks of q are atoms, lists, and functions. atoms are scalars and include the data types numeric, character, date, and time. lists are ordered collections of atoms ( or other lists ) upon which the higher level data structures dictionaries and tables are internally constructed. a dictionary is a map of a list of keys to a list of values. a table is a transposed dictionary of symbol keys and equal length lists ( columns ) as values. a keyed table, analogous to a table with a primary key placed on it, is a dictionary where the keys and values are arranged as two tables. the following code demonstrates the relationships of the data structures. expressions to evaluate appear prefixed with the q ) prompt, with the output of the evaluation shown beneath : these entities are manipulated
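The "table as a transposed dictionary" idea in this chunk can be mimicked in Python — a map from column names to equal-length lists, where reading a row means taking the same index from every column; the column names and values are illustrative, and this is not q syntax:

```python
# A q-style column-oriented table: symbol keys mapped to equal-length
# lists (the columns). Values are illustrative.
table = {
    "sym":   ["ibm", "msft", "ibm"],
    "price": [125.3, 289.1, 126.0],
}

def row(table, i):
    # Row i is element i of every column - the "transposed" view.
    return {col: values[i] for col, values in table.items()}

result = row(table, 1)
print(result)
```

Storing columns rather than rows is what makes whole-column operations (sums, time-series scans) cheap, which is the design choice behind kdb+ described above.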
|
John's shadow looks like it's directly underneath him. What time is it?
|
[
"9am",
"12pm",
"12am",
"2pm"
] |
Key fact:
the sun is located directly overhead at noon
|
B
| 1
|
openbookqa
|
a temporal database stores data relating to time instances. it offers temporal data types and stores information relating to past, present and future time. temporal databases can be uni - temporal, bi - temporal or tri - temporal. more specifically the temporal aspects usually include valid time, transaction time and / or decision time. valid time is the time period during or event time at which a fact is true in the real world. transaction time is the time at which a fact was recorded in the database. decision time is the time at which the decision was made about the fact. used to keep a history of decisions about valid times. types uni - temporal a uni - temporal database has one axis of time, either the validity range or the system time range. bi - temporal a bi - temporal database has two axes of time : valid time transaction time or decision time tri - temporal a tri - temporal database has three axes of time : valid time transaction time decision time this approach introduces additional complexities. temporal databases are in contrast to current databases ( not to be confused with currently available databases ), which store only facts which are believed to be true at the current time. features temporal databases support managing and accessing temporal data by providing one or more of the following features : a time period datatype, including the ability to represent time periods with no end ( infinity or forever ) the ability to define valid and transaction time period attributes and bitemporal relations system - maintained transaction time temporal primary keys, including
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by
|
A frog eats
|
[
"flowers",
"grains",
"six legged creatures",
"cheeseburgers"
] |
Key fact:
a frog eats insects
|
C
| 2
|
openbookqa
|
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
|
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb includes also a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without missing the focus on the searched protein or gene. in phylomedb v4 all the information available for each tree
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
A renewable resource you can find on Earth is
|
[
"stars",
"Oil",
"sand clay",
"fossil fuels"
] |
Key fact:
a renewable resource can be replaced
|
C
| 2
|
openbookqa
|
metpetdb is a relational database and repository for global geochemical data on and images collected from metamorphic rocks from the earth's crust. metpetdb is designed and built by a global community of metamorphic petrologists in collaboration with computer scientists at rensselaer polytechnic institute as part of the national cyberinfrastructure initiative and supported by the national science foundation. metpetdb is unique in that it incorporates image data collected by a variety of techniques, e. g. photomicrographs, backscattered electron images ( sem ), and x - ray maps collected by wavelength dispersive spectroscopy or energy dispersive spectroscopy. purpose metpetdb was built for the purpose of archiving published data and for storing new data for ready access to researchers and students in the petrologic community. this database facilitates the gathering of information for researchers beginning new projects and permits browsing and searching for data relating to anywhere on the globe. metpetdb provides a platform for collaborative studies among researchers anywhere on the planet, serves as a portal for students beginning their studies of metamorphic geology, and acts as a repository of vast quantities of data being collected by researchers globally. design the basic structure of metpetdb is based on a geologic sample and derivative subsamples. geochemical data are linked to subsamples and the minerals within them, while image data can relate to samples or subsamples. metpetdb is designed to store the distinct spatial / textural context
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
seddb is an online database for sediment geochemistry. seddb is based on a relational database that contains the full range of analytical values for sediment samples, primarily from marine sediment cores, including major and trace element concentrations, radiogenic and stable isotope ratios, and data for all types of material such as organic and inorganic components, leachates, and size fractions. seddb also archives a vast array of metadata relating to the individual sample. examples of seddb metadata are : sample latitude and longitude ; elevation below sea surface ; material analyzed ; analytical methodology ; analytical precision and reference standard measurements. as of april, 2013 seddb contains nearly 750, 000 individual analytical data points of 104, 000 samples. seddb contents have been migrated to the earthchem portal. purpose seddb was developed to complement current geological data systems ( petdb, earthchem, navdat and georoc ) with an integrated and easily accessible compilation of geochemical data of marine and continental sediments to be utilized for sedimentological, geochemical, petrological, oceanographic, and paleoclimate research, as well as for educational purposes. funding and management seddb was developed, operated and maintained by a joint team of disciplinary scientists, data scientists, data managers and information technology developers at the lamontdoherty earth observatory as part of the integrated earth data applications ( ieda ) research group funded by the us national science foundation. seddb was built collaborative
|
This animal evolved to reproduce using protective containers instead of live little entities:
|
[
"rhino",
"hamster",
"platypus",
"mongoose"
] |
Key fact:
some adult animals lay eggs
|
C
| 2
|
openbookqa
|
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
|
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb includes also a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without missing the focus on the searched protein or gene. in phylomedb v4 all the information available for each tree
|
tassdb ( tandem splice site database ) is a database of tandem splice sites of eight species see also alternative splicing references external links https : / / archive. today / 20070106023527 / http : / / helios. informatik. uni - freiburg. de / tassdb /.
|
How long might a bear be likely to remain in its den without eating, drinking, or excreting after November?
|
[
"The first few weeks of December",
"Until it hears the call of the wild",
"For around twenty weeks",
"until hunters kill it for its pelt"
] |
Key fact:
hibernation is used for conserving resources by some animals
|
C
| 2
|
openbookqa
|
defaunation is the global, local, or functional extinction of animal populations or species from ecological communities. the growth of the human population, combined with advances in harvesting technologies, has led to more intense and efficient exploitation of the environment. this has resulted in the depletion of large vertebrates from ecological communities, creating what has been termed " empty forest ". defaunation differs from extinction ; it includes both the disappearance of species and declines in abundance. defaunation effects were first implied at the symposium of plant - animal interactions at the university of campinas, brazil in 1988 in the context of neotropical forests. since then, the term has gained broader usage in conservation biology as a global phenomenon. it is estimated that more than 50 percent of all wildlife has been lost in the last 40 years. in 2016, it was estimated that by 2020, 68 % of the world's wildlife would be lost. in south america, there is believed to be a 70 percent loss. a 2021 study found that only around 3 % of the planet's terrestrial surface is ecologically and faunally intact, with healthy populations of native animal species and little to no human footprint. in november 2017, over 15, 000 scientists around the world issued a second warning to humanity, which, among other things, urged for the development and implementation of policies to halt " defaunation, the poaching crisis, and the exploitation and trade of threatened species. " drivers overexploitation the
|
to analyze the trend in the population size of an endangered species over the past 10 years and predict the population size for the next 5 years, we would need to have access to the actual data. however, since we don't have the data, i will provide a general outline of the steps that would be taken to perform this analysis and suggest conservation measures. 1. data collection : gather population data for the endangered species over the past 10 years. this data can be obtained from wildlife surveys, scientific studies, or government reports. 2. data preprocessing : clean and preprocess the data to remove any inconsistencies, outliers, or missing values. 3. time series analysis : perform a time series analysis on the cleaned data to identify trends, seasonality, and other patterns in the population size. this can be done using statistical methods such as autoregressive integrated moving average ( arima ) models, exponential smoothing state space models, or machine learning techniques like long short - term memory ( lstm ) neural networks. 4. forecasting : based on the time series analysis, forecast the population size for the next 5 years. this will provide an estimate of the future population size, which can be used to assess the risk of extinction and inform conservation efforts. 5. conservation measures : based on the findings from the time series analysis and forecasting, suggest conservation measures to help protect and recover the endangered species. these measures may include : a.
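Steps 3 and 4 of the outline above can be sketched in code. A production analysis would fit an ARIMA or state-space model (e.g. via statsmodels), but a dependency-free least-squares linear trend shows the same fit-then-extrapolate idea; the population counts below are hypothetical.

```python
# Minimal sketch of steps 3-4: fit a linear trend to ten years of
# (hypothetical) population counts and extrapolate five years ahead.

def linear_trend_forecast(counts, horizon):
    """Fit y = a + b*t by least squares and project `horizon` steps ahead."""
    n = len(counts)
    ts = range(n)
    t_mean = sum(ts) / n
    y_mean = sum(counts) / n
    b = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, counts)) / \
        sum((t - t_mean) ** 2 for t in ts)
    a = y_mean - b * t_mean
    return [a + b * (n + k) for k in range(horizon)]

# Hypothetical counts for years 1..10 of a declining population.
history = [500, 480, 455, 440, 410, 395, 370, 350, 330, 310]
forecast = linear_trend_forecast(history, 5)
print([round(v) for v in forecast])
```

The declining forecast is what would feed the extinction-risk assessment in step 5; a real model would also quantify uncertainty around each projected value.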
|
extinction is the termination of an organism by the death of its last member. a taxon may become functionally extinct before the death of its last member if it loses the capacity to reproduce and recover. as a species'potential range may be very large, determining this moment is difficult, and is usually done retrospectively. this difficulty leads to phenomena such as lazarus taxa, where a species presumed extinct abruptly " reappears " ( typically in the fossil record ) after a period of apparent absence. over five billion species are estimated to have died out. it is estimated that there are currently around 8. 7 million species of eukaryotes globally, possibly many times more if microorganisms are included. notable extinct animal species include non - avian dinosaurs, saber - toothed cats, and mammoths. through evolution, species arise through the process of speciation. species become extinct when they are no longer able to survive in changing conditions or against superior competition. the relationship between animals and their ecological niches has been firmly established. a typical species becomes extinct within 10 million years of its first appearance, although some species, called living fossils, survive with little to no morphological change for hundreds of millions of years. mass extinctions are relatively rare events ; however, isolated extinctions of species and clades are quite common, and are a natural part of the evolutionary process. only recently have extinctions been recorded with scientists alarmed at the current high rate of extinctions. most species that become extinct are
|
Two iron bars that are similarly charged will likely
|
[
"pull each other",
"touch each other",
"shove each other",
"grab each other"
] |
Key fact:
magnetism can cause objects to repel each other
|
C
| 2
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
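The ideas above — edges as first-class records with labels and properties, and fast traversal over a chain of relationships — can be sketched as a tiny in-memory adjacency-list store. The class and method names are illustrative, not any real graph database's API.

```python
# Directed graph whose edges carry a label and a property dict, plus a
# traversal that follows a chain of labeled edges.

class Graph:
    def __init__(self):
        self.nodes = {}      # node id -> property dict
        self.out_edges = {}  # node id -> list of (label, target, props)

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props
        self.out_edges.setdefault(node_id, [])

    def add_edge(self, src, label, dst, **props):
        self.out_edges[src].append((label, dst, props))

    def neighbors(self, node_id, label=None):
        """Targets of edges leaving node_id, optionally filtered by label."""
        return [dst for (lbl, dst, _) in self.out_edges[node_id]
                if label is None or lbl == label]

g = Graph()
g.add_node("alice", kind="customer")
g.add_node("order1", kind="order")
g.add_node("item1", kind="line_item")
g.add_edge("alice", "PLACED", "order1", year=2021)
g.add_edge("order1", "CONTAINS", "item1")

# Follow the relationship chain customer -> order -> line item.
orders = g.neighbors("alice", "PLACED")
items = [i for o in orders for i in g.neighbors(o, "CONTAINS")]
print(items)
```

Because each node keeps its outgoing edges directly, the traversal is a pair of list lookups rather than the join a relational engine would perform.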
|
the sql select statement returns a result set of records, from one or more tables. a select statement retrieves zero or more rows from one or more database tables or database views. in most applications, select is the most commonly used data manipulation language ( dml ) command. as sql is a declarative programming language, select queries specify a result set, but do not specify how to calculate it. the database translates the query into a " query plan " which may vary between executions, database versions and database software.
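The declarative nature of SELECT described above can be shown with Python's built-in sqlite3 module: the query states *which* rows are wanted and the engine picks the query plan. The table and column names are invented for the demo.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE species (name TEXT, rings INTEGER)")
conn.executemany("INSERT INTO species VALUES (?, ?)",
                 [("oak", 9), ("pine", 6), ("fir", 8)])

# Declarative query: rows with more than six rings, oldest first.
# The engine, not the query text, decides how the result is computed.
rows = conn.execute(
    "SELECT name, rings FROM species WHERE rings > 6 ORDER BY rings DESC"
).fetchall()
print(rows)
conn.close()
```

Running the same statement against a larger table or a newer engine version may use a different plan (e.g. an index scan) while returning the identical result set.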
|
relational transducers are a theoretical model for studying computer systems through the lens of database relations. this model extends the transducer model in formal language theory. they were first introduced in 1998 by abiteboul et al for the study of electronic commerce applications. the computation model treats the input and output as sequences of relations. the state of the transducer is a state of a database and transitions through the state machine can be thought of as updates to the database state. the model was inspired by the design of active databases and motivated by a desire to be able to express business applications declaratively via logical formulas. applications the relational transducer model has been applied to the study of computer network management, e - commerce platforms, and coordination - free distributed systems. formal specification a relational transducer has a schema made up of five components : in, state, out, db, and log. in and out represent the inputs to the system from users and the outputs back to the users respectively. db represents the contents of the database and state represents the information that the system remembers. the log contains the important subset of the inputs and outputs. the relational schemas of each component are disjoint except for log which is a subset of in out. a relational transducer over a relational transducer schema is made up of three parts : the schema a state transition function an output function related models models of computation extending on relational transducers
|
This famous mountain range in Europe came about due to
|
[
"rocks floating on underground water sources",
"the work of an intelligent force",
"the folding of numerous layers of rock",
"the accumulation of soil over millions of years"
] |
Key fact:
the Alps were formed by rock folding
|
C
| 2
|
openbookqa
|
a field is a mineral deposit containing a metal or other valuable resources in a cost - competitive concentration. it is usually used in the context of a mineral deposit from which it is convenient to extract its metallic component. the deposits are exploited by mining in the case of solid mineral deposits ( such as iron or coal ) and extraction wells in case of fluids ( such as oil, gas or brines ). description in geology and related fields a deposit is a layer of rock or soil with uniform internal features that distinguish it from adjacent layers. each layer is generally one of a series of parallel layers which lie one above the other, laid one on the other by natural forces. they may extend for hundreds of thousands of square kilometers of the earth's surface. the deposits are usually seen as a different color material groups or different structure exposed in cliffs, canyons, caves and river banks. individual agglomerates may vary in thickness from a few millimeters up to a kilometer or more. each cluster represents a specific type of deposit : flint river, sea sand, coal swamp, sand dunes, lava beds, etc. it can consist of layers of sediment, usually by marine or differentiations of certain minerals during cooling of magma or during metamorphosis of the previous rock. the mineral deposits are generally oxides, silicates and sulfates or metal not commonly concentrated in the earth's crust. the deposits must be machined to extract the metals in question from the waste rock and minerals from
|
geotechnical engineering, also known as geotechnics, is the branch of civil engineering concerned with the engineering behavior of earth materials. it uses the principles of soil mechanics and rock mechanics to solve its engineering problems. it also relies on knowledge of geology, hydrology, geophysics, and other related sciences. geotechnical engineering has applications in military engineering, mining engineering, petroleum engineering, coastal engineering, and offshore construction. the fields of geotechnical engineering and engineering geology have overlapping knowledge areas. however, while geotechnical engineering is a specialty of civil engineering, engineering geology is a specialty of geology. history humans have historically used soil as a material for flood control, irrigation purposes, burial sites, building foundations, and construction materials for buildings. dykes, dams, and canals dating back to at least 2000 bcefound in parts of ancient egypt, ancient mesopotamia, the fertile crescent, and the early settlements of mohenjo daro and harappa in the indus valleyprovide evidence for early activities linked to irrigation and flood control. as cities expanded, structures were erected and supported by formalized foundations. the ancient greeks notably constructed pad footings and strip - and - raft foundations. until the 18th century, however, no theoretical basis for soil design had been developed, and the discipline was more of an art than a science, relying on experience. several foundation - related engineering problems, such as the leaning tower of pisa, prompted scientists to begin taking a more scientific - based approach
|
groundwater is the water present beneath earth's surface in rock and soil pore spaces and in the fractures of rock formations. about 30 percent of all readily available fresh water in the world is groundwater. a unit of rock or an unconsolidated deposit is called an aquifer when it can yield a usable quantity of water. the depth at which soil pore spaces or fractures and voids in rock become completely saturated with water is called the water table. groundwater is recharged from the surface ; it may discharge from the surface naturally at springs and seeps, and can form oases or wetlands. groundwater is also often withdrawn for agricultural, municipal, and industrial use by constructing and operating extraction wells. the study of the distribution and movement of groundwater is hydrogeology, also called groundwater hydrology. typically, groundwater is thought of as water flowing through shallow aquifers, but, in the technical sense, it can also contain soil moisture, permafrost ( frozen soil ), immobile water in very low permeability bedrock, and deep geothermal or oil formation water. groundwater is hypothesized to provide lubrication that can possibly influence the movement of faults. it is likely that much of earth's subsurface contains some water, which may be mixed with other fluids in some instances. groundwater is often cheaper, more convenient and less vulnerable to pollution than surface water. therefore, it is commonly used for public drinking water supplies. for
|
On continents nearer the north pole than the south pole, winter months such as November see
|
[
"the most daylight",
"the longest daylight",
"growing daylight hours",
"short daylight"
] |
Key fact:
the amount of daylight is least in the winter
|
D
| 3
|
openbookqa
|
a circadian clock, or circadian oscillator, also known as ones internal alarm clock is a biochemical oscillator that cycles with a stable phase and is synchronized with solar time. such a clock's in vivo period is necessarily almost exactly 24 hours ( the earth's current solar day ). in most living organisms, internally synchronized circadian clocks make it possible for the organism to anticipate daily environmental changes corresponding with the daynight cycle and adjust its biology and behavior accordingly. the term circadian derives from the latin circa ( about ) dies ( a day ), since when taken away from external cues ( such as environmental light ), they do not run to exactly 24 hours. clocks in humans in a lab in constant low light, for example, will average about 24. 2 hours per day, rather than 24 hours exactly. the normal body clock oscillates with an endogenous period of exactly 24 hours, it entrains, when it receives sufficient daily corrective signals from the environment, primarily daylight and darkness. circadian clocks are the central mechanisms that drive circadian rhythms. they consist of three major components : a central biochemical oscillator with a period of about 24 hours that keeps time ; a series of input pathways to this central oscillator to allow entrainment of the clock ; a series of output pathways tied to distinct phases of the oscillator that regulate overt rhythms in biochemistry, physiology, and
|
a circadian rhythm ( ), or circadian cycle, is a natural oscillation that repeats roughly every 24 hours. circadian rhythms can refer to any process that originates within an organism ( i. e., endogenous ) and responds to the environment ( is entrained by the environment ). circadian rhythms are regulated by a circadian clock whose primary function is to rhythmically co - ordinate biological processes so they occur at the correct time to maximize the fitness of an individual. circadian rhythms have been widely observed in animals, plants, fungi and cyanobacteria and there is evidence that they evolved independently in each of these kingdoms of life. the term circadian comes from the latin circa, meaning " around ", and dies, meaning " day ". processes with 24 - hour cycles are more generally called diurnal rhythms ; diurnal rhythms should not be called circadian rhythms unless they can be confirmed as endogenous, and not environmental. although circadian rhythms are endogenous, they are adjusted to the local environment by external cues called zeitgebers ( from german zeitgeber ( german : [ tsateb ] ; lit.'time giver') ), which include light, temperature and redox cycles. in clinical settings, an abnormal circadian rhythm in humans is known as a circadian rhythm sleep disorder. history the earliest recorded account of a circadian process is credited to theophrastus, dating
|
most minor planets have rotation periods between 2 and 20 hours. as of 2019, a group of approximately 650 bodies, typically measuring 1 – 20 kilometers in diameter, have periods of more than 100 hours or 4 1⁄6 days. among the slowest rotators, there are currently 15 bodies with a period longer than 1000 hours.
|
A cactus is basically an enormous stem, which means that
|
[
"it can be drunk from",
"it is very thin",
"it holds up bright green leaves",
"it can hide in grass"
] |
Key fact:
a stem is used to store water by some plants
|
A
| 0
|
openbookqa
|
one may also consider binary trees where no leaf is much farther away from the root than any other leaf. ( different balancing schemes allow different definitions of " much farther ". )
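The balancing notion above can be made concrete with an AVL-style definition of "not much farther": every node's two subtree heights differ by at most one. The tuple encoding `(value, left, right)` is just for the demo.

```python
# Height-balance check on binary trees encoded as (value, left, right)
# tuples, with None for an absent child.

def height(node):
    if node is None:
        return 0
    _, left, right = node
    return 1 + max(height(left), height(right))

def is_balanced(node):
    """True iff every node's subtree heights differ by at most one."""
    if node is None:
        return True
    _, left, right = node
    return (abs(height(left) - height(right)) <= 1
            and is_balanced(left) and is_balanced(right))

leaf = lambda v: (v, None, None)
balanced = (1, (2, leaf(4), None), leaf(3))
skewed = (1, (2, (3, leaf(4), None), None), None)
print(is_balanced(balanced), is_balanced(skewed))
```

Other balancing schemes (red-black, weight-balanced) relax this condition differently, but all bound how far any leaf can sit from the root.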
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
a ramicolous lichen is one that lives on branches. references = = = sources = = =
|
A thermal conductor is made of
|
[
"types of rubber",
"types of wire",
"electrodes",
"that which conducts"
] |
Key fact:
a thermal conductor is made of materials that conduct thermal energy
|
D
| 3
|
openbookqa
|
molecular wires ( or sometimes called molecular nanowires ) are molecular chains that conduct electric current. they are the proposed building blocks for molecular electronic devices. their typical diameters are less than three nanometers, while their lengths may be macroscopic, extending to centimeters or more. examples most types of molecular wires are derived from organic molecules. one naturally occurring molecular wire is dna. prominent inorganic examples include polymeric materials such as li2mo6se6 and mo6s9xix, [ pd4 ( co ) 4 ( oac ) 4pd ( acac ) 2 ], and single - molecule extended metal atom chains ( emacs ) which comprise strings of transition metal atoms directly bonded to each other. molecular wires containing paramagnetic inorganic moieties can exhibit kondo peaks. conduction of electrons molecular wires conduct electricity. they typically have non - linear current - voltage characteristics, and do not behave as simple ohmic conductors. the conductance follows typical power law behavior as a function of temperature or electric field, whichever is the greater, arising from their strong one - dimensional character. numerous theoretical ideas have been used in an attempt to understand the conductivity of one - dimensional systems, where strong interactions between electrons lead to departures from normal metallic ( fermi liquid ) behavior. important concepts are those introduced by tomonaga, luttinger and wigner. effects caused by classical coulomb repulsion ( called coulomb blockade ), interactions with vibrational
|
a common method of producing charge in the lab is to rub cat or rabbit fur against stiff rubber, producing a negative charge on the rubber rod. if you hold a rubber rod on one end and rub only the tip of the other end with a fur, you will find that only the tip becomes charged. the electrons you add to the tip of the rod remain where you put them instead of moving around on the rod. rubber is an insulator. insulators are substances that do not allow electrons to move through them. glass, dry wood, most plastics, cloth, and dry air are common insulators. materials that allow electrons to flow freely are called conductors. metals have at least one electron that can move around freely, and all metals are conductors.
|
metal rubber is a broad, informal name for several conductive plastic polymers with metal ions produced by nanosonic inc. in cooperation with virginia tech. this self - assembling nanocomposite is flexible and durable to high and low pressures, temperatures, tensions, and most chemical reactions, and retains all of its physical and chemical properties upon being returned to a ground state. nanosonics metal rubbertm is an electrically conductive and flexible elastomer. it can be mechanically strained to greater than 1000 % of its original dimensions while remaining electrically conductive. as metal rubber can carry data and electrical power and is environmentally rugged, it can be used as a flexible and stretchable electrical conductor in the aerospace / defense, electronics, and bioengineering markets. properties metal rubber needs to contain around 1 % metal ions to maintain its conductive properties, allowing the material to retain an elastic quality as well as keeping the heavy metal component low. metal rubber has a strain of 300 % although the sheet itself can be mechanically strained to greater than 1000 % of its original dimensions. the elastic modulus is 0. 01 gpa and the service resistivity per square sheet is. 1100 ohms. the maximum service temperature is 170 degrees celsius ( 338 degrees fahrenheit ), while the minimum service temperature is −60 degrees celsius ( −76 degrees fahrenheit ). it carries an electrical charge that can be used to transport power and data. it is typical of an
|
In which situation might most rescues be conducted via air?
|
[
"eviction",
"flooding",
"fire",
"bomb"
] |
Key fact:
heavy rains cause flooding
|
B
| 1
|
openbookqa
|
a vulnerability database ( vdb ) is a platform aimed at collecting, maintaining, and disseminating information about discovered computer security vulnerabilities. the database will customarily describe the identified vulnerability, assess the potential impact on affected systems, and any workarounds or updates to mitigate the issue. a vdb will assign a unique identifier to each vulnerability cataloged such as a number ( e. g. 123456 ) or alphanumeric designation ( e. g. vdb - 2020 - 12345 ). information in the database can be made available via web pages, exports, or api. a vdb can provide the information for free, for pay, or a combination thereof. history the first vulnerability database was the " repaired security bugs in multics ", published by february 7, 1973 by jerome h. saltzer. he described the list as " a list of all known ways in which a user may break down or circumvent the protection mechanisms of multics ". the list was initially kept somewhat private with the intent of keeping vulnerability details until solutions could be made available. the published list contained two local privilege escalation vulnerabilities and three local denial of service attacks. types of vulnerability databases major vulnerability databases such as the iss x - force database, symantec / securityfocus bid database, and the open source vulnerability database ( osvdb ) aggregate a broad range of publicly disclosed vulnerabilities, including common vu
|
exploitdb, sometimes stylized as exploit database or exploit - database, is a public and open source vulnerability database maintained by offensive security. it is one of the largest and most popular exploit databases in existence. while the database is publicly available via their website, the database can also be used by utilizing the searchsploit command - line tool which is native to kali linux. the database also contains proof - of - concepts ( pocs ), helping information security professionals learn new exploit variations. in ethical hacking and penetration testing guide, rafay baloch said exploit - db had over 20, 000 exploits, and was available in backtrack linux by default. in ceh v10 certified ethical hacker study guide, ric messier called exploit - db a " great resource ", and stated it was available within kali linux by default, or could be added to other linux distributions. the current maintainers of the database, offensive security, are not responsible for creating the database. the database was started in 2004 by a hacker group known as milw0rm and has changed hands several times. as of 2023, the database contained 45, 000 entries from more than 9, 000 unique authors. see also offensive security offensive security certified professional references external links official website
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
Where might you see light reflect?
|
[
"rocks",
"bottled liquid",
"sand",
"wood"
] |
Key fact:
when light hits a reflective object , that light bounces off that object
|
B
| 1
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
a crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. they are characterized by symmetry, morphology, and directionally dependent physical properties. a crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. ( molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in x - ray, neutron, and electron diffraction based crystallography ). crystal structures of crystalline material are typically determined from x - ray or neutron single - crystal diffraction data and stored in crystal structure databases. they are routinely identified by comparing reflection intensities and lattice spacings from x - ray powder diffraction data with entries in powder - diffraction fingerprinting databases. crystal structures of nanometer sized crystalline samples can be determined via structure factor amplitude information from single - crystal electron diffraction data or structure factor amplitude and phase angle information from fourier transforms of hrtem images of crystallites. they are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice - fringe fingerprint plots with entries in a lattice - fringe fingerprinting database. crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. many provide structure visualization capabilities. they can be browser based or
|
the chemical database service is an epsrc - funded mid - range facility that provides uk academic institutions with access to a number of chemical databases. it has been hosted by the royal society of chemistry since 2013, before which it was hosted by daresbury laboratory ( part of the science and technology facilities council ). currently, the included databases are : acd / i - lab, a tool for prediction of physicochemical properties and nmr spectra from a chemical structure available chemicals directory, a structure - searchable database of commercially available chemicals cambridge structural database ( csd ), a crystallographic database of organic and organometallic structures inorganic crystal structure database ( icsd ), a crystallographic database of inorganic structures crystalworks, a database combining data from csd, icsd and crystmet detherm, a database of thermophysical data for chemical compounds and mixtures spresiweb, a database of organic compounds and reactions = = references = =
|
Which stores sustenance in the seed form?
|
[
"giraffes",
"cats",
"cattle",
"hydrangea"
] |
Key fact:
a seed is used for storing food for a new plant
|
D
| 3
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
the animal genome size database is a catalogue of published genome size estimates for vertebrate and invertebrate animals. it was created in 2001 by dr. t. ryan gregory of the university of guelph in canada. as of september 2005, the database contains data for over 4, 000 species of animals. a similar database, the plant dna c - values database ( c - value being analogous to genome size in diploid organisms ) was created by researchers at the royal botanic gardens, kew, in 1997.
|
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
|
Some animals change their appearance completely during a stage of the life cycle known as
|
[
"metamorphosing",
"metal",
"Seven",
"drawing"
] |
Key fact:
metamorphosis is a stage in the life cycle process of some animals
|
A
| 0
|
openbookqa
|
style modeling implies building a computational representation of the musical surface that captures important stylistic features from data. statistical approaches are used to capture the redundancies in terms of pattern dictionaries or repetitions, which are later recombined to generate new musical data. style mixing can be realized by analysis of a database containing multiple musical examples in different styles.
|
|
in computing, the countmin sketch ( cm sketch ) is a probabilistic data structure that serves as a frequency table of events in a stream of data. it uses hash functions to map events to frequencies, but unlike a hash table uses only sub - linear space, at the expense of overcounting some events due to collisions. the countmin sketch was invented in 2003 by graham cormode and s. muthu muthukrishnan and described by them in a 2005 paper. countmin sketch is an alternative to count sketch and ams sketch and can be considered an implementation of a counting bloom filter ( fan et al., 1998 ) or multistage - filter. however, they are used differently and therefore sized differently : a countmin sketch typically has a sublinear number of cells, related to the desired approximation quality of the sketch, while a counting bloom filter is more typically sized to match the number of elements in the set. data structure the goal of the basic version of the countmin sketch is to consume a stream of events, one at a time, and count the frequency of the different types of events in the stream. at any time, the sketch can be queried for the frequency of a particular event type i from a universe of event types u, and will return an estimate of this frequency that is within a certain distance of the true frequency, with a certain probability. the
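The update/query behaviour described above can be sketched as follows. The width and depth values are illustrative (the passage relates them to the desired approximation quality), and deriving per-row hashes from SHA-256 is one convenient choice for a demo, not the pairwise-independent hash family used in the original paper:

```python
import hashlib

class CountMinSketch:
    """Sketch with `depth` rows of `width` counters and one hash per row."""

    def __init__(self, width=1000, depth=4):
        self.width = width
        self.depth = depth
        self.table = [[0] * width for _ in range(depth)]

    def _hashes(self, item):
        # One column index per row, derived by salting the item with the row.
        for row in range(self.depth):
            h = hashlib.sha256(f"{row}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.width

    def update(self, item, count=1):
        for row, col in enumerate(self._hashes(item)):
            self.table[row][col] += count

    def query(self, item):
        # Taking the minimum over rows bounds the overcount from collisions;
        # the estimate never undercounts.
        return min(self.table[row][col]
                   for row, col in enumerate(self._hashes(item)))
```

As the passage notes, collisions can only inflate counters, so `query` returns a value at least as large as the true frequency.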
|
Hummingbirds gather nectar using their
|
[
"bills",
"wings",
"hands",
"feet"
] |
Key fact:
a skinny beak is used for obtaining food by a bird from small spaces
|
A
| 0
|
openbookqa
|
|
a cost database is a computerized database of cost estimating information, which is normally used with construction estimating software to support the formation of cost estimates. a cost database may also simply be an electronic reference of cost data. overview a cost database includes the electronic equivalent of a cost book, or cost reference book, a tool used by estimators for many years. cost books may be internal records at a particular company or agency, or they may be commercially published books on the open market. aec teams and federal agencies can and often do collect internally sourced data from their own specialists, vendors, and partners. this is valuable personalized cost data that is captured but often doesn't cover the same range that commercial cost book data can. internally sourced data is difficult to maintain and does not have the same level of developed user interface or functionality as a commercial product. the cost database may be stored in a relational database management system, which may be in either an open or proprietary format, serving the data to the cost estimating software. the cost database may be hosted in the cloud. estimators use a cost database to store data in a structured way which is easy to manage and retrieve. details costing data the most basic element of a cost estimate and therefore the cost database is the estimate line item or work item. an example is " concrete, 4000 psi ( 30 mpa ), " which is the description of the item. in the cost database, an item is a row or record in
|
the flat ( or table ) model consists of a single, two - dimensional array of data elements, where all members of a given column are assumed to be similar values, and all members of a row are assumed to be related to one another. for instance, columns for name and password that might be used as a part of a system security database. each row would have the specific password associated with an individual user. columns of the table often have a type associated with them, defining them as character data, date or time information, integers, or floating point numbers. this tabular format is a precursor to the relational model.
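The flat model above (one two-dimensional array with typed columns) can be sketched directly from the text's name/password example. The table contents and the `lookup_password` helper are hypothetical illustrations:

```python
# Flat (table) model: a single 2-D array where each column holds similar
# values and each row's members are related to one another.
# Columns: (name: str, password: str) -- the example from the text.
users = [
    ("alice", "s3cret"),
    ("bob", "hunter2"),
]

def lookup_password(table, name):
    """Return the password associated with a user, or None if absent."""
    for row_name, password in table:
        if row_name == name:
            return password
    return None
```

Every row shares the same column layout, which is what makes this tabular format a precursor to the relational model.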
|
A thing which is a producer in the food chain is most likely to be a
|
[
"squid",
"clover",
"mouse",
"hawk"
] |
Key fact:
producer is a kind of role in the food chain process
|
B
| 1
|
openbookqa
|
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, object - relational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
|
gun ( also known as graph universe node, gun. js, and gundb ) is an open source, offline - first, real - time, decentralized, graph database written in javascript for the web browser. the database is implemented as a peer - to - peer network distributed across " browser peers " and " runtime peers ". it employs multi - master replication with a custom commutative replicated data type ( crdt ). gun is currently used in the decentralized version of the internet archive.
|
If a park becomes more and more arid, the animals there will probably
|
[
"start increasing in population",
"have more food available",
"have less to drink",
"start becoming more friendly"
] |
Key fact:
as dryness increases in an environment , the available water in that environment will decrease
|
C
| 2
|
openbookqa
|
moving into an area, or immigration, is a key factor in the growth of populations. shown above is actual vintage luggage left by some of the millions of immigrants who came through ellis island and into the united states.
|
consumers take in food by eating producers or other living things.
|
for example, a college student who is doing a term project and wants to know the average consumption of soda in that college town on friday night will most probably call some of his friends and ask them how many cans of soda they drink, or go to a nearby party to do an easy survey. there is always a trade - off between this method of quick sampling and accuracy. collected samples may not represent the population of interest and therefore be a source of bias.
|
A plant's roots break down rocks as the roots do what?
|
[
"decay",
"grow old",
"develop",
"decrease"
] |
Key fact:
a plant 's roots slowly break down rocks as the roots grow
|
C
| 2
|
openbookqa
|
|
database theory encapsulates a broad range of topics related to the study and research of the theoretical realm of databases and database management systems. theoretical aspects of data management include, among other areas, the foundations of query languages, computational complexity and expressive power of queries, finite model theory, database design theory, dependency theory, foundations of concurrency control and database recovery, deductive databases, temporal and spatial databases, real - time databases, managing uncertain data and probabilistic databases, and web data. most research work has traditionally been based on the relational model, since this model is usually considered the simplest and most foundational model of interest. corresponding results for other data models, such as object - oriented or semi - structured models, or, more recently, graph data models and xml, are often derivable from those for the relational model. database theory helps one to understand the complexity and power of query languages and their connection to logic. starting from relational algebra and first - order logic ( which are equivalent by codd's theorem ) and the insight that important queries such as graph reachability are not expressible in this language, more powerful language based on logic programming and fixpoint logic such as datalog were studied. the theory also explores foundations of query optimization and data integration. here most work studied conjunctive queries, which admit query optimization even under constraints using the chase algorithm. the main research conferences in the area are the acm symposium on principles of database systems ( pods
|
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by
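The two minimum requirements the passage lists (data presented as relations in rows and columns, plus relational operators to manipulate them) can be shown with a minimal sketch using Python's standard `sqlite3` module; the table and data are invented for illustration:

```python
import sqlite3

# An in-memory relational database: data is stored in rows and columns
# and both updated and queried through SQL, as the passage describes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customer (name) VALUES ('ada')")

# The query result is presented as a relation: a list of (id, name) rows.
rows = conn.execute("SELECT id, name FROM customer").fetchall()
```

The `SELECT` statement is the relational operator here: it produces a new tabular presentation of the stored data without the caller knowing anything about the physical storage.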
|
The state of a gas vaporized from a surface liquid under increased pressure and higher temperatures is
|
[
"part of how nature replenishes, purifies and recirculates water",
"a stage when water molecules are prevented from escaping into the atmosphere",
"a stage when moisture leaves the atmosphere without being recirculated",
"a stage that only occurs once"
] |
Key fact:
evaporation is a stage in the water cycle process
|
A
| 0
|
openbookqa
|
water also moves through the living organisms in an ecosystem. plants soak up large amounts of water through their roots. the water then moves up the plant and evaporates from the leaves in a process called transpiration. the process of transpiration, like evaporation, returns water back into the atmosphere.
|
figure 46. 14 water from the land and oceans enters the atmosphere by evaporation or sublimation, where it condenses into clouds and falls as rain or snow. precipitated water may enter freshwater bodies or infiltrate the soil. the cycle is complete when surface or groundwater reenters the ocean. ( credit : modification of work by john m. evans and howard perlman, usgs ).
|
the biogeochemical cycle that recycles water is the water cycle. the water cycle involves a series of interconnected pathways involving both the biotic and abiotic components of the biosphere. water is obviously an extremely important aspect of every ecosystem. life cannot exist without water. many organisms contain a large amount of water in their bodies, and many live in water, so the water cycle is essential to life on earth. water continuously moves between living organisms, such as plants, and non - living things, such as clouds, rivers, and oceans ( figure below ).
|
Fluids are relocated through a daffodil due to the
|
[
"ears",
"eyes",
"mouth",
"shoot"
] |
Key fact:
a plant stem is the vehicle for transporting water and food from roots to the rest of the plant
|
D
| 3
|
openbookqa
|
the bitterdb is a database of compounds that were reported to taste bitter to humans. the aim of the bitterdb database is to gather information about bitter - tasting natural and synthetic compounds, and their cognate bitter taste receptors ( t2rs or tas2rs ). summary the bitterdb includes over 670 compounds that were reported to taste bitter to humans. the compounds can be searched by name, chemical structure, similarity to other bitter compounds, association with a particular human bitter taste receptor, and by other properties as well. the database also contains information on mutations in bitter taste receptors that were shown to influence receptor activation by bitter compounds. database overview bitter compounds bitterdb currently contains more than 670 compounds that were cited in the literature as bitter. for each compound, the database offers information regarding its molecular properties, references for the compounds bitterness, including additional information about the bitterness category of the compound ( e. g. a bitter - sweet or slightly bitter annotation ), different compound identifiers ( smiles, cas registry number, iupac systematic name ), an indication whether the compound is derived from a natural source or is synthetic, a link to the compounds pubchem entry and different file formats for downloading ( sdf, image, smiles ). over 200 bitter compounds have been experimentally linked to their corresponding human bitter taste receptors. for those compounds, bitterdb provides additional information, including links to the publications indicating these ligandreceptor interactions, the effective concentration for receptor
|
the ki database ( or ki db ) is a public domain database of published binding affinities ( ki ) of drugs and chemical compounds for receptors, neurotransmitter transporters, ion channels, and enzymes. the resource is maintained by the university of north carolina at chapel hill and is funded by the nimh psychoactive drug screening program and by a gift from the heffter research institute. as of april 2010, the database had data for 7 449 compounds at 738 different receptors and, as of 27 april 2018, 67 696 ki values. the ki database has data useful for both chemical biology and chemogenetics.
|
computational audiology is a branch of audiology that employs techniques from mathematics and computer science to improve clinical treatments and scientific understanding of the auditory system. computational audiology is closely related to computational medicine, which uses quantitative models to develop improved methods for general disease diagnosis and treatment. overview in contrast to traditional methods in audiology and hearing science research, computational audiology emphasizes predictive modeling and large - scale analytics ( " big data " ) rather than inferential statistics and small - cohort hypothesis testing. the aim of computational audiology is to translate advances in hearing science, data science, information technology, and machine learning to clinical audiological care. research to understand hearing function and auditory processing in humans as well as relevant animal species represents translatable work that supports this aim. research and development to implement more effective diagnostics and treatments represent translational work that supports this aim. for people with hearing difficulties, tinnitus, hyperacusis, or balance problems, these advances might lead to more precise diagnoses, novel therapies, and advanced rehabilitation options including smart prostheses and e - health / mhealth apps. for care providers, it can provide actionable knowledge and tools for automating part of the clinical pathway. the field is interdisciplinary and includes foundations in audiology, auditory neuroscience, computer science, data science, machine learning, psychology, signal processing, natural language processing, otology and vestibulology. applications in computational audiology, models and
|
35 percent of what depends on pollination?
|
[
"flowers",
"people",
"crops",
"bees"
] |
Key fact:
pollination requires pollinating animals
|
C
| 2
|
openbookqa
|
beebase was an online bioinformatics database that hosted data related to apis mellifera, the european honey bee along with some pathogens and other species. it was developed in collaboration with the honey bee genome sequencing consortium. in 2020 it was archived and replaced by the hymenoptera genome database. data and services biological data and services available on beebase included : dna and protein sequence data official bee gene set ( developed by and hosted at beebase ) genome browser linkage maps server to search the honey bee genome using blast services in feb 2007, beebase consisted of a gbrowser - based genome viewer and a cmap - based comparative map viewer, both modules of the generic model organism database ( gmod ) project. the genome viewer included tracks for known honey bee genes, predicted gene sets ( ensembl, ncbi, embl - heidelberg ), sts markers ( solignac and hunt linkage maps ), honey bee expressed sequence tags ( ests ), homologs in fruit fly, mosquito and other insects and transposable elements. the honey bee comparative map viewer displayed linkage maps and the physical map ( genome assembly ), highlighting markers that are common among maps. additionally, a qtl viewer and a gene expression database were planned. the genome sequence was to serve as a reference to link these diverse data types. beebase organized the community annotation of the bee genome in collaboration with baylor college of medicine human genome sequencing center.
|
|
|
Without the sun providing warmth and light, life on Earth would
|
[
"nothing",
"flying",
"an impossibility",
"Mars"
] |
Key fact:
the sun is the source of energy for life on Earth
|
C
| 2
|
openbookqa
|
|
database theory encapsulates a broad range of topics related to the study and research of the theoretical realm of databases and database management systems. theoretical aspects of data management include, among other areas, the foundations of query languages, computational complexity and expressive power of queries, finite model theory, database design theory, dependency theory, foundations of concurrency control and database recovery, deductive databases, temporal and spatial databases, real - time databases, managing uncertain data and probabilistic databases, and web data. most research work has traditionally been based on the relational model, since this model is usually considered the simplest and most foundational model of interest. corresponding results for other data models, such as object - oriented or semi - structured models, or, more recently, graph data models and xml, are often derivable from those for the relational model. database theory helps one to understand the complexity and power of query languages and their connection to logic. starting from relational algebra and first - order logic ( which are equivalent by codd's theorem ) and the insight that important queries such as graph reachability are not expressible in this language, more powerful language based on logic programming and fixpoint logic such as datalog were studied. the theory also explores foundations of query optimization and data integration. here most work studied conjunctive queries, which admit query optimization even under constraints using the chase algorithm. the main research conferences in the area are the acm symposium on principles of database systems ( pods
|
an uncertain database is a kind of database studied in database theory. the goal of uncertain databases is to manage information on which there is some uncertainty. uncertain databases make it possible to explicitly represent and manage uncertainty on the data, usually in a succinct way. formal definition at the basis of uncertain databases is the notion of possible world. specifically, a possible world of an uncertain database is a ( certain ) database which is one of the possible realizations of the uncertain database. a given uncertain database typically has more than one, and potentially infinitely many, possible worlds. a formalism to represent uncertain databases then explains how to succinctly represent a set of possible worlds into one uncertain database. types of uncertain databases uncertain database models differ in how they represent and quantify these possible worlds : incomplete databases are a compact representation of the set of possible worlds the use of null in sql, arguably the most commonplace instantiation of uncertain databases, is an example of incomplete database model. probabilistic databases are a compact representation of a probability distribution over the set of possible worlds. fuzzy databases are a compact representation of a fuzzy set of the possible worlds. though mostly studied in the relational setting, uncertain database models can also be defined in other relational models such as graph databases or xml databases. incomplete database the most common database model is the relational model. multiple incomplete database models have been defined over the relational model, that form extensions to the relational algebra. these have been called imieliskilipski
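The possible-worlds notion above can be made concrete: an incomplete relation with one null value succinctly represents every certain database obtained by filling that null. A small sketch, assuming a finite attribute domain for illustration:

```python
# Sketch: an incomplete relation with nulls expands into one possible world
# per way of filling the unknown values (finite domain assumed).
from itertools import product

NULL = None
domain = [1, 2, 3]
tuples = [("alice", 1), ("bob", NULL)]  # bob's value is unknown

def possible_worlds(rel, dom):
    # Every NULL ranges independently over the domain; known values are fixed.
    slots = [dom if v is NULL else [v] for (_, v) in rel]
    for choice in product(*slots):
        yield [(name, v) for ((name, _), v) in zip(rel, choice)]

worlds = list(possible_worlds(tuples, domain))
print(len(worlds))  # -> 3
```

One incomplete database of two tuples thus stands for three certain databases, which is the succinctness the chunk refers to.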
|
Which of these is determined by heredity?
|
[
"If a plant produces gymnosperms",
"If someone has blue contacts in their eyes",
"If a child can build tall walls",
"If a plant receives enough sunlight"
] |
Key fact:
the type of seed of a plant is an inherited characteristic
|
A
| 0
|
openbookqa
|
gymnosperms are vascular plants that produce seeds in cones. examples include conifers such as pine and spruce trees. the gymnosperm life cycle has a dominant sporophyte generation. both gametophytes and the next generation ’ s new sporophytes develop on the sporophyte parent plant. figure below is a diagram of a gymnosperm life cycle.
|
gymnosperms are vascular plants that produce seeds in cones. examples include conifers such as pine and spruce trees. the gymnosperm life cycle has a very dominant sporophyte generation. both gametophytes and the next generation ’ s new sporophytes develop on the sporophyte parent plant. figure below is a diagram of a gymnosperm life cycle.
|
plant ontology ( po ) is a collection of ontologies developed by the plant ontology consortium. these ontologies describe anatomical structures and growth and developmental stages across viridiplantae. the po is intended for multiple applications, including genetics, genomics, phenomics, and development, taxonomy and systematics, semantic applications and education. project members oregon state university new york botanical garden l. h. bailey hortorium at cornell university ensembl soybase sswap sgn gramene the arabidopsis information resource ( tair ) maizegdb university of missouri at st. louis missouri botanical garden see also generic model organism database open biomedical ontologies obo foundry references external links plant ontology consortium gramene tair maizegdb nasc soybase
|
Two felled trees, split in twain, with the same number of rings, means they're the same
|
[
"emotionally",
"crab",
"type of tree",
"age"
] |
Key fact:
a tree growing a tree-growth ring occurs once per year
|
D
| 3
|
openbookqa
|
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb includes also a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without missing the focus on the searched protein or gene. in phylomedb v4 all the information available for each tree
|
treefam ( tree families database ) is a database of phylogenetic trees of animal genes. it aims at developing a curated resource that gives reliable information about ortholog and paralog assignments, and evolutionary history of various gene families. treefam defines a gene family as a group of genes that evolved after the speciation of single - metazoan animals. it also tries to include outgroup genes like yeast ( s. cerevisiae and s. pombe ) and plant ( a. thaliana ) to reveal these distant members. treefam is also an ortholog database. unlike other pairwise alignment based ones, treefam infers orthologs by means of gene trees. it fits a gene tree into the universal species tree and finds historical duplications, speciations and losses events. treefam uses this information to evaluate tree building, guide manual curation, and infer complex ortholog and paralog relations. the basic elements of treefam are gene families that can be divided into two parts : treefam - a and treefam - b families. treefam - b families are automatically created. they might contain errors given complex phylogenies. treefam - a families are manually curated from treefam - b ones. family names and node names are assigned at the same time. the ultimate goal of treefam is to present a curated resource for all the families. treefa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
_____ tide is a stage in the tide cycle process that surfers like the best
|
[
"low",
"mud",
"high",
"earthy"
] |
Key fact:
high tide is a stage in the tide cycle process
|
C
| 2
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
|
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by
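The two minimum criteria named above, presenting data as relations (tables of rows and columns) and providing operators to manipulate them, can be demonstrated with Python's built-in sqlite3 module (the `customer` table is an invented example):

```python
# Sketch: data presented as a relation and manipulated with SQL,
# using Python's standard-library sqlite3 module.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO customer (id, name) VALUES (?, ?)",
                [(1, "ada"), (2, "grace")])
# The query result comes back as rows of a table, i.e. a relation.
rows = con.execute("SELECT name FROM customer ORDER BY id").fetchall()
print(rows)  # -> [('ada',), ('grace',)]
```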
|
braille is read by using your fingers to
|
[
"hold the book",
"hold the flashlight",
"turn the page",
"feel the words"
] |
Key fact:
the shape of an object can be discovered through feeling that object
|
D
| 3
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
/ books.
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
|
You can bake cookies with the sun using
|
[
"melting ice",
"aluminium foil",
"blocks of ice",
"soft cheese"
] |
Key fact:
a thermal energy conductor transfers heat from hotter objects to cooler objects
|
B
| 1
|
openbookqa
|
swiss cheese model
|
the chemical database service is an epsrc - funded mid - range facility that provides uk academic institutions with access to a number of chemical databases. it has been hosted by the royal society of chemistry since 2013, before which it was hosted by daresbury laboratory ( part of the science and technology facilities council ). currently, the included databases are : acd / i - lab, a tool for prediction of physicochemical properties and nmr spectra from a chemical structure available chemicals directory, a structure - searchable database of commercially available chemicals cambridge structural database ( csd ), a crystallographic database of organic and organometallic structures inorganic crystal structure database ( icsd ), a crystallographic database of inorganic structures crystalworks, a database combining data from csd, icsd and crystmet detherm, a database of thermophysical data for chemical compounds and mixtures spresiweb, a database of organic compounds and reactions = = references = =
|
a crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. they are characterized by symmetry, morphology, and directionally dependent physical properties. a crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. ( molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in x - ray, neutron, and electron diffraction based crystallography ). crystal structures of crystalline material are typically determined from x - ray or neutron single - crystal diffraction data and stored in crystal structure databases. they are routinely identified by comparing reflection intensities and lattice spacings from x - ray powder diffraction data with entries in powder - diffraction fingerprinting databases. crystal structures of nanometer sized crystalline samples can be determined via structure factor amplitude information from single - crystal electron diffraction data or structure factor amplitude and phase angle information from fourier transforms of hrtem images of crystallites. they are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice - fringe fingerprint plots with entries in a lattice - fringe fingerprinting database. crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. many provide structure visualization capabilities. they can be browser based or
|
Organisms exist only because of the energy from
|
[
"the moon",
"coffee",
"our yellow dwarf",
"the kardashians"
] |
Key fact:
the sun is the source of energy for life on Earth
|
C
| 2
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
q is a programming language for array processing, developed by arthur whitney. it is proprietary software, commercialized by kx systems. q serves as the query language for kdb +, a disk based and in - memory, column - based database. kdb + is based on the language k, a terse variant of the language apl. q is a thin wrapper around k, providing a more readable, english - like interface. one of the use cases is financial time series analysis, as one could do inexact time matches. an example is to match a bid and the ask before that. both timestamps slightly differ and are matched anyway. overview the fundamental building blocks of q are atoms, lists, and functions. atoms are scalars and include the data types numeric, character, date, and time. lists are ordered collections of atoms ( or other lists ) upon which the higher level data structures dictionaries and tables are internally constructed. a dictionary is a map of a list of keys to a list of values. a table is a transposed dictionary of symbol keys and equal length lists ( columns ) as values. a keyed table, analogous to a table with a primary key placed on it, is a dictionary where the keys and values are arranged as two tables. the following code demonstrates the relationships of the data structures. expressions to evaluate appear prefixed with the q ) prompt, with the output of the evaluation shown beneath : these entities are manipulated
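The q snippet the chunk refers to did not survive extraction. As a stand-in, the same relationships (dictionary as keys-to-values map, table as a transposed dictionary of equal-length columns) can be sketched in Python rather than q:

```python
# Sketch (in Python, not q syntax) of the data-structure relationships:
# a dictionary maps a list of keys to a list of values, and a table is a
# transposed dictionary of column names to equal-length value lists.
dictionary = dict(zip(["a", "b", "c"], [1, 2, 3]))   # keys -> values

# A "table" as a dict of columns, each column an equal-length list.
table = {"name": ["x", "y"], "price": [10.0, 12.5]}

# Row i of the table is the i-th element of every column (the transpose).
def row(tbl, i):
    return {col: vals[i] for col, vals in tbl.items()}

print(row(table, 1))  # -> {'name': 'y', 'price': 12.5}
```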
|
kbpedia combines multiple knowledge bases to harness their complementary strengths and address their individual limitations. wikipedia and wikidata, for example, provide extensive, up - to - date knowledge that is both human - and machine - readable. by integrating these with schema. org and other structured information sources, kbpedia can create a more robust and semantically rich framework. this integration allows for better interoperability among different data sets and enhances the overall depth and quality of the knowledge graph. this results in improved semantic tasks, such as entity recognition, semantic search, and data linking, making kbpedia a powerful tool for researchers and developers working with large - scale data sets.
|
As a pond is slowly evaporated, the food for ducks in that area
|
[
"is tasteless",
"is more",
"is booming",
"is less"
] |
Key fact:
as available water in an environment decreases , the amount of available food in that environment will decrease
|
D
| 3
|
openbookqa
|
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
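The two options for handling sparseness described above can be illustrated directly; the run-length encoder below is a toy stand-in for the compression techniques the chunk mentions, not any particular system's method:

```python
# Sketch of the two sparseness strategies:
# (1) keep the nulls and compress them away (here, toy run-length encoding);
# (2) remove entries that are only null, keeping a sparse {index: value} map.
measurements = [None, None, 4.2, None, 7.1, None, None, None]

# Option 2: drop null cells, storing only populated positions.
sparse = {i: v for i, v in enumerate(measurements) if v is not None}

# Option 1: run-length encoding that squeezes out the runs of None.
def rle(values):
    out = []
    for v in values:
        if out and out[-1][0] == v:
            out[-1] = (v, out[-1][1] + 1)
        else:
            out.append((v, 1))
    return out

print(sparse)  # -> {2: 4.2, 4: 7.1}
print(rle(measurements))
```

With 6 of 8 cells null (75% sparse, even beyond the 40-50% the passage cites), both representations are far smaller than the dense list.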
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
the bitterdb is a database of compounds that were reported to taste bitter to humans. the aim of the bitterdb database is to gather information about bitter - tasting natural and synthetic compounds, and their cognate bitter taste receptors ( t2rs or tas2rs ). summary the bitterdb includes over 670 compounds that were reported to taste bitter to humans. the compounds can be searched by name, chemical structure, similarity to other bitter compounds, association with a particular human bitter taste receptor, and by other properties as well. the database also contains information on mutations in bitter taste receptors that were shown to influence receptor activation by bitter compounds. database overview bitter compounds bitterdb currently contains more than 670 compounds that were cited in the literature as bitter. for each compound, the database offers information regarding its molecular properties, references for the compounds bitterness, including additional information about the bitterness category of the compound ( e. g. a bitter - sweet or slightly bitter annotation ), different compound identifiers ( smiles, cas registry number, iupac systematic name ), an indication whether the compound is derived from a natural source or is synthetic, a link to the compounds pubchem entry and different file formats for downloading ( sdf, image, smiles ). over 200 bitter compounds have been experimentally linked to their corresponding human bitter taste receptors. for those compounds, bitterdb provides additional information, including links to the publications indicating these ligandreceptor interactions, the effective concentration for receptor
|
If it's been twelve weeks since you put on shorts, and you notice today the hours of light and dark are just about equal, how long will it be until Winter?
|
[
"just over a year",
"around thirteen weeks or so",
"when G.R.R. Martin says it's here",
"when the groundhog sees its shadow"
] |
Key fact:
a new season occurs once per three months
|
B
| 1
|
openbookqa
|
data shadows refer to the information that a person leaves behind unintentionally while taking part in daily activities such as checking their e - mails, scrolling through social media or even by using their debit or credit card. the term data shadow was coined in 1972 by kerstin anér, a member of the swedish legislature. the generated information has the potential to create a vastly detailed record of an individual's daily trails, which includes the individual's thoughts and interests, whom they communicate with, information about the organizations with which they work or interact with and so forth. this information can be dispersed to a dozen organizations and servers depending on their use. along with individuals, the activities of institutions and organizations are also tracked. data shadows are closely linked with data footprints, which are defined as the data that has been left behind by the individual themselves through various activities such as online activities, communication information, and transactions. in a chapter for the book geography and technology, researcher matthew zook and his co - authors note that data shadows have come as a result of people becoming " digital individuals " and that these shadows are continually evolving and changing. they are used to model and predict political opinions, and make inferences about a person's political values or susceptibility to advertising. digital footprint the data or digital footprints are obtained from monitoring and tracking individuals digital activities. digital footprints provide a drive for companies such as facebook and google to invest in obtaining data generated from these footprints, in
|
he examines the rhind papyrus, the moscow papyrus and explores their understanding of binary numbers, fractions and solid shapes. he then travels to babylon and discovered that the way we tell the time today is based on the babylonian 60 base number system. so because of the babylonians we have 60 seconds in a minute, and 60 minutes in an hour.
|
a mental timeline is a mental representation of time that is spatial in nature. the mental timeline is similar to the mental number line where numbers are perceived increasing left to right. earlier time periods ( the past ) are associated with the left side of space and later time periods ( the future ) with the right. it is typically thought of as being presented left to right for populations who read left to right ( e. g. english ) and right to left for populations who read right to left ( e. g. arabic ). rationale one rationale behind the connection between time and space is that space is an easier concept to understand than time. space is three dimensional and can be perceived directly using visual sensors, that is, we can physically see a space. in contrast time is one dimensional and can only be perceived indirectly, for example, by seeing that a person has aged, we can infer that time has passed however we do not physically see time. there are many other examples of spatial representations of time around the world such as clocks, calendars and hourglasses. this reasoning is also given for the use of spatial metaphors connecting the intangible concept of time with a more solid concept of space. spatial metaphors such as, theres a big day ahead or put the past behind you are common colloquialisms that represent the idea of a mental timeline. they provide linguistic evidence that our cognitive perception of time is linked to our representation of space. evidence linguistic evidence evidence
|
As the population of zebra decreases
|
[
"competition among African wild dogs increases",
"predators learn to swim",
"prey communities get larger",
"prey will be more visible"
] |
Key fact:
as the population of prey decreases , competition between predators will increase
|
A
| 0
|
openbookqa
|
the convict cichlids ( archocentrus nigrofasciatus ) have been observed in two different contexts, high and low - risk predatory settings. these fish behave with less anti - predator and foraging behaviours when located in dangerous predatory areas, high - risk compared to the low - risk zones. these behaviour adjustments in various contexts support the risk allocation hypothesis since the animals follow its assumptions.
|
the behavior of predators has a significant impact on the behavior of their prey. predators exert selective pressure on prey populations, leading to the evolution of various adaptations in prey species that help them avoid being caught. these adaptations can be morphological, physiological, or behavioral. behavioral adaptations in prey species often involve changes in their activity patterns, habitat use, or social behavior to minimize the risk of predation. for example, some prey species may become more vigilant, spending more time scanning their surroundings for potential threats. others may alter their foraging habits, feeding at times or in locations where predators are less active. some prey species may also form groups or herds, which can provide increased protection from predators through collective vigilance and the " dilution effect, " where the chances of any individual being caught are reduced as the group size increases. one example of a predator - prey interaction where the behavior of the prey changes to avoid being caught is the relationship between african lions ( predators ) and impalas ( prey ). impalas are known for their incredible agility and speed, which they use to evade predators like lions. when faced with a potential threat, impalas will often perform a behavior called " stotting " or " pronking, " which involves jumping high into the air with all four legs held stiffly. this display serves multiple purposes : it demonstrates the impala's physical fitness and ability to escape, potentially deterring the predator from pursuing ; it also alerts other
|
predation is a biological interaction in which one organism, the predator, kills and eats another organism, its prey. it is one of a family of common feeding behaviours that includes parasitism and micropredation ( which usually do not kill the host ) and parasitoidism ( which always does, eventually ). it is distinct from scavenging on dead prey, though many predators also scavenge ; it overlaps with herbivory, as seed predators and destructive frugivores are predators. predation behavior varies significantly depending on the organism. many predators, especially carnivores, have evolved distinct hunting strategies. pursuit predation involves the active search for and pursuit of prey, whilst ambush predators instead wait for prey to present an opportunity for capture, and often use stealth or aggressive mimicry. other predators are opportunistic or omnivorous and only practice predation occasionally. most obligate carnivores are specialized for hunting. they may have acute senses such as vision, hearing, or smell for prey detection. many predatory animals have sharp claws or jaws to grip, kill, and cut up their prey. physical strength is usually necessary for large carnivores such as big cats to kill larger prey. other adaptations include stealth, endurance, intelligence, social behaviour, and aggressive mimicry that improve hunting efficiency. predation has a powerful selective effect on prey, and the prey develops anti - predator adaptations such as warning colouration, alarm calls and other
|
Which of the following would occur in nature?
|
[
"an osprey catches a fish with claws",
"a fish catches a fish with its claws",
"a dog catches a fish with its claws",
"a worm catches a fish with its claws"
] |
Key fact:
claws are used to catch prey by some predators
|
A
| 0
|
openbookqa
|
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
|
the world register of marine species ( worms ) is a taxonomic database that aims to provide an authoritative and comprehensive catalogue and list of names of marine organisms. content the content of the registry is edited and maintained by scientific specialists on each group of organism. these taxonomists control the quality of the information, which is gathered from the primary scientific literature as well as from some external regional and taxon - specific databases. worms maintains valid names of all marine organisms, but also provides information on synonyms and invalid names. it is an ongoing task to maintain the registry, since new species are constantly being discovered and described by scientists ; in addition, the nomenclature and taxonomy of existing species is often corrected or changed as new research is constantly being published. subsets of worms content are made available, and can have separate badging and their own home / launch pages, as " subregisters ", such as the world list of marine acanthocephala, world list of actiniaria, world amphipoda database, world porifera database, and so on. as of december 2018 there were 60 such taxonomic subregisters, including a number presently under construction. a second category of subregisters comprises regional species databases such as the african register of marine species, belgian register of marine species, etc., while a third comprises thematic subsets such as the world register of deep - sea species ( wordss ), world register of introduced marine species ( wrims ), etc
|
there are about 27, 000 species of bony fish ( figure below ), which are divided into two classes : ray - finned fish and lobe - finned fish. most bony fish are ray - finned. these thin fins consist of webs of skin over flexible spines. lobe - finned fish, on the other hand, have fins that resemble stump - like appendages.
|
Pedaling a bicycle can be accomplished by
|
[
"dolphins",
"dogs",
"possessors of thumbs",
"emus"
] |
Key fact:
a human can pedal a bicycle
|
C
| 2
|
openbookqa
|
the vertebrate genome annotation ( vega ) database is a biological database dedicated to assisting researchers in locating specific areas of the genome and annotating genes or regions of vertebrate genomes. the vega browser is based on ensembl web code and infrastructure and provides a public curation of known vertebrate genes for the scientific community. the vega website is updated frequently to maintain the most current information about vertebrate genomes and attempts to present consistently high - quality annotation of all its published vertebrate genomes or genome regions. vega was developed by the wellcome trust sanger institute and is in close association with other annotation databases, such as zfin ( the zebrafish information network ), the havana group and genbank. manual annotation is currently more accurate at identifying splice variants, pseudogenes, polyadenylation features, non - coding regions and complex gene arrangements than automated methods. history the vertebrate genome annotation ( vega ) database was first made public in 2004 by the wellcome trust sanger institute. it was designed to view manual annotations of human, mouse and zebrafish genomic sequences, and it is the central cache for genome sequencing centers to deposit their annotation of human chromosomes. manual annotation of genomic data is extremely valuable to produce an accurate reference gene set but is expensive compared with automatic methods and so has been limited to model organisms. annotation tools
|
dolphins are mammals that have adapted to swimming and reproducing in water.
|
treefam ( tree families database ) is a database of phylogenetic trees of animal genes. it aims at developing a curated resource that gives reliable information about ortholog and paralog assignments, and evolutionary history of various gene families. treefam defines a gene family as a group of genes that evolved after the speciation of single - metazoan animals. it also tries to include outgroup genes like yeast ( s. cerevisiae and s. pombe ) and plant ( a. thaliana ) to reveal these distant members. treefam is also an ortholog database. unlike other pairwise alignment based ones, treefam infers orthologs by means of gene trees. it fits a gene tree into the universal species tree and finds historical duplications, speciations and losses events. treefam uses this information to evaluate tree building, guide manual curation, and infer complex ortholog and paralog relations. the basic elements of treefam are gene families that can be divided into two parts : treefam - a and treefam - b families. treefam - b families are automatically created. they might contain errors given complex phylogenies. treefam - a families are manually curated from treefam - b ones. family names and node names are assigned at the same time. the ultimate goal of treefam is to present a curated resource for all the families. treefa
|
A rose bush photosynthesizes and nearby people
|
[
"are stung by bees",
"visit a foreign country",
"get new fresh air",
"drink more iced tea"
] |
Key fact:
In the photosynthesis process oxygen has the role of waste product
|
C
| 2
|
openbookqa
|
the bitterdb is a database of compounds that were reported to taste bitter to humans. the aim of the bitterdb database is to gather information about bitter - tasting natural and synthetic compounds, and their cognate bitter taste receptors ( t2rs or tas2rs ). summary the bitterdb includes over 670 compounds that were reported to taste bitter to humans. the compounds can be searched by name, chemical structure, similarity to other bitter compounds, association with a particular human bitter taste receptor, and by other properties as well. the database also contains information on mutations in bitter taste receptors that were shown to influence receptor activation by bitter compounds. database overview bitter compounds bitterdb currently contains more than 670 compounds that were cited in the literature as bitter. for each compound, the database offers information regarding its molecular properties, references for the compounds bitterness, including additional information about the bitterness category of the compound ( e. g. a bitter - sweet or slightly bitter annotation ), different compound identifiers ( smiles, cas registry number, iupac systematic name ), an indication whether the compound is derived from a natural source or is synthetic, a link to the compounds pubchem entry and different file formats for downloading ( sdf, image, smiles ). over 200 bitter compounds have been experimentally linked to their corresponding human bitter taste receptors. for those compounds, bitterdb provides additional information, including links to the publications indicating these ligandreceptor interactions, the effective concentration for receptor
|
beebase was an online bioinformatics database that hosted data related to apis mellifera, the european honey bee along with some pathogens and other species. it was developed in collaboration with the honey bee genome sequencing consortium. in 2020 it was archived and replaced by the hymenoptera genome database. data and services biological data and services available on beebase included : dna and protein sequence data official bee gene set ( developed by and hosted at beebase ) genome browser linkage maps server to search the honey bee genome using blast services in feb 2007, beebase consisted of a gbrowser - based genome viewer and a cmap - based comparative map viewer, both modules of the generic model organism database ( gmod ) project. the genome viewer included tracks for known honey bee genes, predicted gene sets ( ensembl, ncbi, embl - heidelberg ), sts markers ( solignac and hunt linkage maps ), honey bee expressed sequence tags ( ests ), homologs in fruit fly, mosquito and other insects and transposable elements. the honey bee comparative map viewer displayed linkage maps and the physical map ( genome assembly ), highlighting markers that are common among maps. additionally, a qtl viewer and a gene expression database were planned. the genome sequence was to serve as a reference to link these diverse data types. beebase organized the community annotation of the bee genome in collaboration with baylor college of medicine human genome sequencing center.
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
One arctic animal, the polar bear, may spend its time doing what
|
[
"racing",
"swimming",
"creating",
"reading"
] |
Key fact:
arctic animals live in an arctic environment
|
B
| 1
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
clustering and diversifying web search results with graph - based word sense induction. computational linguistics, 39 ( 3 ), mit press, 2013, pp. 709 – 754.
|
computational lexicons and dictionaries. in encyclopaedia of language and linguistics ( 2nd ed. ), k. r. brown, ed.
|
If soil is loose, it makes it what for oxygen to get in?
|
[
"seven",
"boring",
"harder",
"simpler"
] |
Key fact:
the looseness of soil increases the amount of oxygen in that soil
|
D
| 3
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
database theory encapsulates a broad range of topics related to the study and research of the theoretical realm of databases and database management systems. theoretical aspects of data management include, among other areas, the foundations of query languages, computational complexity and expressive power of queries, finite model theory, database design theory, dependency theory, foundations of concurrency control and database recovery, deductive databases, temporal and spatial databases, real - time databases, managing uncertain data and probabilistic databases, and web data. most research work has traditionally been based on the relational model, since this model is usually considered the simplest and most foundational model of interest. corresponding results for other data models, such as object - oriented or semi - structured models, or, more recently, graph data models and xml, are often derivable from those for the relational model. database theory helps one to understand the complexity and power of query languages and their connection to logic. starting from relational algebra and first - order logic ( which are equivalent by codd's theorem ) and the insight that important queries such as graph reachability are not expressible in this language, more powerful language based on logic programming and fixpoint logic such as datalog were studied. the theory also explores foundations of query optimization and data integration. here most work studied conjunctive queries, which admit query optimization even under constraints using the chase algorithm. the main research conferences in the area are the acm symposium on principles of database systems ( pods
|
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
|
To measure the length of an elephant's trunk you would need
|
[
"a tape measure",
"a tusk",
"a scale",
"a pool"
] |
Key fact:
a meter stick is used to measure length
|
A
| 0
|
openbookqa
|
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
|
the context of count data.
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
Giant walls of frozen H2O carved out
|
[
"the great plains",
"the pacific ocean",
"the great lakes",
"the grand canyon"
] |
Key fact:
the Great Lakes were formed by glaciers moving over the ground
|
C
| 2
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
integrated surface database ( isd ) is global database compiled by the national oceanic and atmospheric administration ( noaa ) and the national centers for environmental information ( ncei ) comprising hourly and synoptic surface observations compiled globally from ~ 35, 500 weather stations ; it is updated, automatically, hourly. the data largely date back to paper records which were keyed in by hand from'60s and'70s ( and in some cases, weather observations from over one hundred years ago ). it was developed by the joint federal climate complex project in asheville, north carolina. = = references = =
|
the web - based map collection includes :
|
Unlike CO2, oxygen is a waste product of
|
[
"the moon",
"rocks",
"cactus",
"hair"
] |
Key fact:
In the photosynthesis process oxygen has the role of waste product
|
C
| 2
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
tassdb (tandem splice site database) is a database of tandem splice sites of eight species. see also: alternative splicing. external links: https://archive.today/20070106023527/http://helios.informatik.uni-freiburg.de/tassdb/
|
a crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. they are characterized by symmetry, morphology, and directionally dependent physical properties. a crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. (molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in x-ray, neutron, and electron diffraction based crystallography). crystal structures of crystalline material are typically determined from x-ray or neutron single-crystal diffraction data and stored in crystal structure databases. they are routinely identified by comparing reflection intensities and lattice spacings from x-ray powder diffraction data with entries in powder-diffraction fingerprinting databases. crystal structures of nanometer-sized crystalline samples can be determined via structure factor amplitude information from single-crystal electron diffraction data, or structure factor amplitude and phase angle information from fourier transforms of hrtem images of crystallites. they are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice-fringe fingerprint plots with entries in a lattice-fringe fingerprinting database. crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. many provide structure visualization capabilities. they can be browser based or
|
Clothes fresh out of the laundry are great because they are
|
[
"darkness",
"comfort",
"temperate",
"wonder"
] |
Key fact:
a hot something is a source of heat
|
C
| 2
|
openbookqa
|
a statistical database is a database used for statistical analysis purposes. it is an olap (online analytical processing) system, rather than an oltp (online transaction processing) system. modern decision and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment (e.g., temperature, time). the measured data (or variables) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse, with many null or zero values. it is not uncommon for a statistical database to be 40% to 50% sparse. there are two options for dealing with the sparseness: (1) leave the null values in there and use compression techniques to squeeze them out, or (2) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid-1980s. privacy in statistical databases: in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are:
|
a graph database (gdb) is a database that uses graph structures for semantic queries, with nodes, edges, and properties to represent and store data. a key concept of the system is the graph (or edge or relationship), which relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter-connected data. graph databases are commonly referred to as nosql databases. graph databases are similar to 1970s network-model databases in that both represent general graphs, but network-model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first-class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table (although a table is a logical element, so this approach imposes a level of abstraction between the graph database management system and physical storage devices). others use a key-value store or document-oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
a taxonomic database is a database created to hold information on biological taxa (for example, groups of organisms organized by species name or other taxonomic identifier) for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online; to underpin the operation of web-based species information systems; as a part of biological collection management (for example, in museums and herbaria); as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals: taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in the breadth of the groups of taxa and the geographical space they seek to include, for example: beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers (scientific name, author, and, for zoological taxa, year of original publication), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon (such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc.). some databases, such as the global biodiversity information facility (gbif) database and the barcode of life data system, store the dna barcode of a taxon if one exists (also called the barcode index number (bin)), which may
|
Spring may be sprung when
|
[
"winter is coming",
"our globe shifts",
"summer is here",
"snakes hibernate"
] |
Key fact:
Earth 's tilt on its axis causes seasons to occur
|
B
| 1
|
openbookqa
|
a graph database (gdb) is a database that uses graph structures for semantic queries, with nodes, edges, and properties to represent and store data. a key concept of the system is the graph (or edge or relationship), which relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter-connected data. graph databases are commonly referred to as nosql databases. graph databases are similar to 1970s network-model databases in that both represent general graphs, but network-model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first-class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table (although a table is a logical element, so this approach imposes a level of abstraction between the graph database management system and physical storage devices). others use a key-value store or document-oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
diurnality is a form of plant and animal behavior characterized by activity during daytime, with a period of sleeping or other inactivity at night. the common adjective used for daytime activity is "diurnal". the timing of activity by an animal depends on a variety of environmental factors such as the temperature, the ability to gather food by sight, the risk of predation, and the time of year. diurnality is a cycle of activity within a 24-hour period; cyclic activities called circadian rhythms are endogenous cycles not dependent on external cues or environmental factors except for a zeitgeber. animals active during twilight are crepuscular, those active during the night are nocturnal, and animals active at sporadic times during both night and day are cathemeral. plants that open their flowers during the daytime are described as diurnal, while those that bloom during nighttime are nocturnal. the timing of flower opening is often related to the time at which preferred pollinators are foraging. for example, sunflowers open during the day to attract bees, whereas the night-blooming cereus opens at night to attract large sphinx moths. animals: many types of animals are classified as being diurnal, meaning they are active during the daytime and inactive or have periods of rest during the nighttime. commonly classified diurnal animals include mammals, birds, and reptiles. most primates are diurnal, including humans. scientifically classifying diurnality within animals
|
mycobank is an online database documenting new mycological names and combinations, eventually combined with descriptions and illustrations. it is run by the westerdijk fungal biodiversity institute in utrecht. each novelty, after being screened by nomenclatural experts and found in accordance with the icn (international code of nomenclature for algae, fungi, and plants), is allocated a unique mycobank number before the new name has been validly published. this number can then be cited by the naming author in the publication where the new name is being introduced. only then does this unique number become public in the database. by doing so, this system can help solve the problem of knowing which names have been validly published and in which year. mycobank is linked to other important mycological databases such as index fungorum, life science identifiers, the global biodiversity information facility (gbif), and other databases. mycobank is one of three nomenclatural repositories recognized by the nomenclature committee for fungi; the others are index fungorum and fungal names. mycobank has emerged as the primary registration system for new fungal taxa and nomenclatural acts. according to a 2021 analysis of taxonomic innovations in lichen and allied fungi between 2018 and 2020, 97.7% of newly described taxa and 76.5% of new combinations obtained their registration numbers from mycobank, suggesting broad adoption by the mycological community. the system
|
When a plant is watered, spraying water on leaves is less useful than
|
[
"spritzing the stem of the plant",
"putting the plant in the rain",
"using a sprinkler system",
"pouring water on soil"
] |
Key fact:
roots are a vehicle for absorbing water and nutrients from soil into the plant
|
D
| 3
|
openbookqa
|
in hydrology, stemflow is the flow of intercepted water down the trunk or stem of a plant. stemflow, along with throughfall, is responsible for the transfer of precipitation and nutrients from the canopy to the soil. in tropical rainforests, where this kind of flow can be substantial, erosion gullies can form at the base of the trunk. however, in more temperate climates stemflow levels are low and have little erosional power. measurement: there are a variety of ways stemflow volume is measured in the field. the most common direct measurement currently used is the bonding of bisected pvc or other plastic tubing around the circumference of the tree trunk, connected and funneled into a graduated cylinder for manual collection or a tipping-bucket rain gauge for automatic collection. at times the tubing is wrapped multiple times around the trunk in order to ensure more complete collection. determining factors, precipitation: the primary meteorological characteristics of a rainfall event that influence stemflow are: rainfall continuity (the more frequent and extended the gaps during the event where no rainfall occurs, the higher the likelihood that potential stemflow volume is lost to evapotranspiration; this is also governed by air temperature, relative humidity and, most significantly, wind speed); rainfall intensity (the amount of total stemflow is diminished when the amount of rain in a given period surpasses the capacity of the flow paths); rain angle (stemflow generally starts earlier when rainfall is more horizontal; this is more of a determinant in an
|
the functions of the stem are to raise and support the leaves and reproductive organs above the level of the soil, and to facilitate absorption of light for photosynthesis, gas exchange, water exchange (transpiration), pollination, and seed dispersal. the stem also serves as a conduit, from roots to overhead structures, for water and other growth-enhancing substances. these conduits consist of specialised tissues known as vascular bundles, which give the name "vascular plants" to the angiosperms.
|
liquid water is taken up by plant roots. the plant releases water vapor into the atmosphere. this is transpiration.
|