question: string
options: list
rationale: string
label: string
label_idx: int64
dataset: string
chunk1: string
chunk2: string
chunk3: string
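The records below follow the schema above. As a minimal sketch (the `Record` class and `check` helper are illustrative, not part of any real loader API; the example values are taken from the first record below, with the long context chunks elided), one row can be represented and sanity-checked like this:

```python
# minimal sketch of one record in this flattened dataset view;
# field names and types come from the schema above, the class and
# helper names are hypothetical
from dataclasses import dataclass

@dataclass
class Record:
    question: str
    options: list      # four answer choices
    rationale: str     # "key fact" supporting the gold answer
    label: str         # gold answer letter, "A".."D"
    label_idx: int     # 0-based index into options
    dataset: str       # source benchmark, e.g. "openbookqa"
    chunk1: str        # retrieved context passages (may be off-topic,
    chunk2: str        # as in several rows below)
    chunk3: str

def check(rec: Record) -> None:
    # the letter label and the integer index must agree
    assert rec.label_idx == "ABCD".index(rec.label)
    assert 0 <= rec.label_idx < len(rec.options)

row = Record(
    question="A month that is in the summer in the Northern Hemisphere "
             "directly follows which month?",
    options=["March", "November", "January", "May"],
    rationale="Key fact: June is during the summer in the northern hemisphere",
    label="D",
    label_idx=3,
    dataset="openbookqa",
    chunk1="...", chunk2="...", chunk3="...",  # long passages elided
)
check(row)  # "D" is index 3, and options has 4 entries
```

The redundant `label`/`label_idx` pair makes the agreement check above worth running once over the whole dump.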
A month that is in the summer in the Northern Hemisphere directly follows which month?
[ "March", "November", "January", "May" ]
Key fact: June is during the summer in the northern hemisphere
D
3
openbookqa
a graph database (gdb) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph (or edge or relationship). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter-connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network-model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first-class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table (although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices). others use a key-value store or document-oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
an object database or object-oriented database is a database management system in which information is represented in the form of objects as used in object-oriented programming. object databases are different from relational databases, which are table-oriented. a third type, object-relational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object-oriented database management systems (oodbmss), also called odbms (object database management system), combine database capabilities with object-oriented programming language capabilities. oodbmss allow object-oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web-based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer-aided design (cad). some object-oriented databases are designed to work well with object-oriented programming languages such as delphi, ruby, python
a relational database (rdb) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system (rdbms) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql (structured query language) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper "a relational model of data for large shared data banks". in this paper and later papers, he defined what he meant by relation. one well-known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum: present the data to the user as relations (a presentation in tabular form, i.e. as a collection of tables with each table consisting of a set of rows and columns); provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store (june 1976). oracle was released in 1979 by
If you plug this into an outlet, current will flow through it.
[ "sword tip", "rubber", "plastic", "air." ]
Key fact: sending electricity through a conductor causes electric current to flow through that conductor
A
0
openbookqa
an object database or object-oriented database is a database management system in which information is represented in the form of objects as used in object-oriented programming. object databases are different from relational databases, which are table-oriented. a third type, object-relational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object-oriented database management systems (oodbmss), also called odbms (object database management system), combine database capabilities with object-oriented programming language capabilities. oodbmss allow object-oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web-based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer-aided design (cad). some object-oriented databases are designed to work well with object-oriented programming languages such as delphi, ruby, python
a graph database (gdb) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph (or edge or relationship). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter-connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network-model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first-class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table (although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices). others use a key-value store or document-oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
genedb was a genome database for eukaryotic and prokaryotic pathogens. references external links http://www.genedb.org
Which would likely cause an injury to a baby?
[ "a bath", "a pacifier", "a bottle", "a wasp" ]
Key fact: a stinger is used for defense by a wasp
D
3
openbookqa
an object database or object-oriented database is a database management system in which information is represented in the form of objects as used in object-oriented programming. object databases are different from relational databases, which are table-oriented. a third type, object-relational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object-oriented database management systems (oodbmss), also called odbms (object database management system), combine database capabilities with object-oriented programming language capabilities. oodbmss allow object-oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web-based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer-aided design (cad). some object-oriented databases are designed to work well with object-oriented programming languages such as delphi, ruby, python
a graph database (gdb) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph (or edge or relationship). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter-connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network-model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first-class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table (although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices). others use a key-value store or document-oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
a relational database (rdb) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system (rdbms) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql (structured query language) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper "a relational model of data for large shared data banks". in this paper and later papers, he defined what he meant by relation. one well-known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum: present the data to the user as relations (a presentation in tabular form, i.e. as a collection of tables with each table consisting of a set of rows and columns); provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store (june 1976). oracle was released in 1979 by
Chimpanzees dig for insects with sticks; what is another example of using tools?
[ "birds using twigs to build nests", "otters using rocks to open clams", "anteaters using their tongue to catch ants", "koalas using their pouch to hold infants" ]
Key fact: An example of using tools is a chimpanzee digging for insects with a stick
B
1
openbookqa
the nest itself is usually built in tall trees; this species may also use tree holes as a possible nesting option, although not yet recorded for this species and its breeding habits.
distraction displays, also known as diversionary displays, or paratrepsis, are anti-predator behaviors used to attract the attention of an enemy away from something, typically the nest or young, that is being protected by a parent. distraction displays are sometimes classified more generically under "nest protection behaviors" along with aggressive displays such as mobbing. these displays have been studied most extensively in bird species, but have also been documented in populations of stickleback fish and in some mammal species. distraction displays frequently take the form of injury-feigning. however, animals may also imitate the behavior of a small rodent or alternative prey item for the predator; imitate young or nesting behaviors such as brooding (to cause confusion as to the true location of the nest); mimic foraging behaviors away from the nest; or simply draw attention to oneself. evolution origin the behaviour was first described by aristotle in his history of animals. david lack postulated that distraction displays simply resulted from the bird's alarm at having been flushed from the nest and had no decoy purpose. he noted a case in the european nightjar, when a bird led him around the nest several times but made no attempt to lure him away. he additionally noted courtship displays mixed with the distraction displays of the bird, suggesting that distraction display is not a purposeful action unto itself, and observed that the display became less vigorous the more frequently he visited the nest, as would be expected if the display were a response driven by fear and
cameras can be used to study birds, to monitor nests and record information about nest survival and nesting behaviors, or even to catch nest predators in the act. the timing of breeding in relation to weather variables can be studied, as well as the size of eggs and chicks in relation to food quality and abundance. records of habitat variables at each nest provide helpful information on the birds' nest site selection criteria, and maps of all nests found in a study area allow for examination of how territories are distributed through the habitat.
A micro-scratch test is able to determine which rock formations deep inside the earth's sub-surface are the hardest and how resistant the formations are to being
[ "scratched, fractured or otherwise deformed", "transformed into different minerals", "melted and otherwise liquified", "scratched with ice breakers" ]
Key fact: heat and pressure change the remains of prehistoric living things into natural gas
A
0
openbookqa
a crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. they are characterized by symmetry, morphology, and directionally dependent physical properties. a crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. (molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in x-ray, neutron, and electron diffraction based crystallography). crystal structures of crystalline material are typically determined from x-ray or neutron single-crystal diffraction data and stored in crystal structure databases. they are routinely identified by comparing reflection intensities and lattice spacings from x-ray powder diffraction data with entries in powder-diffraction fingerprinting databases. crystal structures of nanometer-sized crystalline samples can be determined via structure factor amplitude information from single-crystal electron diffraction data or structure factor amplitude and phase angle information from fourier transforms of hrtem images of crystallites. they are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice-fringe fingerprint plots with entries in a lattice-fringe fingerprinting database. crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. many provide structure visualization capabilities. they can be browser based or
metpetdb is a relational database and repository for global geochemical data on, and images collected from, metamorphic rocks from the earth's crust. metpetdb is designed and built by a global community of metamorphic petrologists in collaboration with computer scientists at rensselaer polytechnic institute as part of the national cyberinfrastructure initiative and supported by the national science foundation. metpetdb is unique in that it incorporates image data collected by a variety of techniques, e.g. photomicrographs, backscattered electron images (sem), and x-ray maps collected by wavelength dispersive spectroscopy or energy dispersive spectroscopy. purpose metpetdb was built for the purpose of archiving published data and for storing new data for ready access to researchers and students in the petrologic community. this database facilitates the gathering of information for researchers beginning new projects and permits browsing and searching for data relating to anywhere on the globe. metpetdb provides a platform for collaborative studies among researchers anywhere on the planet, serves as a portal for students beginning their studies of metamorphic geology, and acts as a repository of vast quantities of data being collected by researchers globally. design the basic structure of metpetdb is based on a geologic sample and derivative subsamples. geochemical data are linked to subsamples and the minerals within them, while image data can relate to samples or subsamples. metpetdb is designed to store the distinct spatial/textural context
freeze-fracture is a natural occurrence leading to processes like erosion of the earth's crust or simply deterioration of food via freeze-thaw cycles. to investigate the process further, freeze-fracture is artificially induced to view in detail the properties of materials. fracture during freezing is often the result of crystallizing water, which results in expansion. crystallization is also a factor leading to chemical changes of a substance due to changes in the crystal surroundings, called eutectic formation. imaging the fractured surface of a frozen substance allows the interior of the structure to be investigated, as illustrated by the picture of a fractured piece of glacier called an iceberg. by photographing at high magnifications, more can be learnt about the fractured object's substructure and the changes in the object that occur during freezing. when imaging fractured surfaces in detail, changes occurring during and immediately after fracture, as well as sample preparation, must be taken into account if trying to infer the unbroken material's structure. the often relatively cold temperatures needed to make an object solid enough to fracture, and the fracture process itself, stress and deform the material. imaging of fine detail under sub-zero conditions is difficult. the material will start to warm again when removed to a position for photography. ambient gases, often water vapor, will condense on the cold surfaces, reacting with them, obscuring detail and further warming the object, allowing it to reshape. freezing considerations freezing of a substance is a relative term, often
Which relationship is true?
[ "wind is renewable; metal is nonrenewable", "wind is recyclable, metal is other", "wind is happy, metal is other", "wind is nonrenewable; metal is renewable" ]
Key fact: metal is a nonrenewable resource
A
0
openbookqa
from a human point of view, natural resources can be classified as either renewable or nonrenewable. renewable resources, such as sunlight and living things, can be remade quickly by natural processes. nonrenewable resources, such as fossil fuels and soil, cannot be remade or else take millions of years to remake.
a non-renewable resource (also called a finite resource) is a natural resource that cannot be readily replaced by natural means at a pace quick enough to keep up with consumption. an example is carbon-based fossil fuels. the original organic matter, with the aid of heat and pressure, becomes a fuel such as oil or gas. earth minerals and metal ores, fossil fuels (coal, petroleum, natural gas) and groundwater in certain aquifers are all considered non-renewable resources, though individual elements are always conserved (except in nuclear reactions, nuclear decay or atmospheric escape). conversely, resources such as timber (when harvested sustainably) and wind (used to power energy conversion systems) are considered renewable resources, largely because their localized replenishment can also occur within human lifespans. earth minerals and metal ores earth minerals and metal ores are examples of non-renewable resources. the metals themselves are present in vast amounts in earth's crust, and their extraction by humans only occurs where they are concentrated by natural geological processes (such as heat, pressure, organic activity, weathering and other processes) enough to become economically viable to extract. these processes generally take from tens of thousands to millions of years, through plate tectonics, tectonic subsidence and crustal recycling. the localized deposits of metal ores near the surface which can be extracted economically by humans are non-renewable in human time-frames. there are certain rare earth minerals and elements that are more
renewable resources are natural resources that are remade by natural processes as quickly as people use them. examples of renewable resources include sunlight and wind. they are in no danger of being used up. metals and some other minerals are considered renewable as well because they are not destroyed when they are used. instead, they can be recycled and used over and over again.
Two females usually can produce
[ "children", "a progeny", "offspring", "nothing" ]
Key fact: two females can not usually reproduce with each other
D
3
openbookqa
genedb was a genome database for eukaryotic and prokaryotic pathogens. references external links http://www.genedb.org
edas was a database of alternatively spliced human genes. it doesn't seem to exist anymore. see also aspicdb database references external links http://www.gene-bee.msu.ru/edas/.
in bioinformatics, a gene disease database is a systematized collection of data, typically structured to model aspects of reality, in a way to comprehend the underlying mechanisms of complex diseases, by understanding multiple composite interactions between phenotype-genotype relationships and gene-disease mechanisms. gene disease databases integrate human gene-disease associations from various expert-curated databases and text-mining-derived associations, including mendelian, complex and environmental diseases. introduction experts in different areas of biology and bioinformatics have been trying to comprehend the molecular mechanisms of diseases to design preventive and therapeutic strategies for a long time. for some illnesses, it has become apparent that it is not enough to obtain an index of the disease-related genes but to uncover how disruptions of molecular grids in the cell give rise to disease phenotypes. moreover, even with the unprecedented wealth of information available, obtaining such catalogues is extremely difficult. genetic broadly speaking, genetic diseases are caused by aberrations in genes or chromosomes. many genetic diseases are developed from before birth. genetic disorders account for a significant number of the health care problems in our society. advances in the understanding of these diseases have increased both the life span and quality of life for many of those affected by genetic disorders. recent developments in bioinformatics and laboratory genetics have made possible the better delineation of certain malformation and mental retardation syndromes, so that their mode of inheritance
amphibians hatch from
[ "trees", "rocks", "the sky", "calcium life pods" ]
Key fact: amphibians hatch from eggs
D
3
openbookqa
dr. duke's phytochemical and ethnobotanical databases is an online database developed by james a. duke at the usda. the databases report species, phytochemicals, and biological activity, as well as ethnobotanical uses. the current phytochemical and ethnobotanical databases facilitate plant, chemical, bioactivity, and ethnobotany searches. a large number of plants and their chemical profiles are covered, and data are structured to support browsing and searching in several user-focused ways. for example, users can get a list of chemicals and activities for a specific plant of interest, using either its scientific or common name; download a list of chemicals and their known activities in pdf or spreadsheet form; find plants with chemicals known for a specific biological activity; display a list of chemicals with their ld toxicity data; find plants with potential cancer-preventing activity; display a list of plants for a given ethnobotanical use; find out which plants have the highest levels of a specific chemical. references to the supporting scientific publications are provided for each specific result. also included are links to nutritional databases, plants and cancer treatments and other plant-related databases. the content of the database is licensed under the creative commons cc0 public domain. external links dr. duke's phytochemical and ethnobotanical databases references (dataset) u. s. department of agriculture, agricultural research service. 1992-2016
a graph database (gdb) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph (or edge or relationship). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter-connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network-model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first-class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table (although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices). others use a key-value store or document-oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
the plant dna c-values database (https://cvalues.science.kew.org/) is a comprehensive catalogue of c-value (nuclear dna content, or in diploids, genome size) data for land plants and algae. the database was created by prof. michael d. bennett and dr. ilia j. leitch of the royal botanic gardens, kew, uk. the database was originally launched as the "angiosperm dna c-values database" in april 1997, essentially as an online version of collected data lists that had been published by prof. bennett and colleagues since the 1970s. release 1.0 of the more inclusive plant dna c-values database was launched in 2001, with subsequent releases 2.0 in january 2003 and 3.0 in december 2004. in addition to the angiosperm dataset made available in 1997, the database has been expanded taxonomically several times and now includes data from pteridophytes (since 2000), gymnosperms (since 2001), bryophytes (since 2001), and algae (since 2004) (see (1) for update history). (note that each of these subset databases is cited individually as they may contain different sets of authors). the most recent release of the database (release 7.1) went live in april 2019. it contains data for 12,273 species of plants comprising 10,770 angiosperms, 421 gymnos
Why would a lightbulb be dark?
[ "the bulb needs to be charged", "the room is too bright", "the circuit is incomplete", "it ran out of electricity" ]
Key fact: electricity can not flow through an open circuit
C
2
openbookqa
most circuits have devices such as light bulbs that convert electric energy to other forms of energy. in the case of a light bulb, electricity is converted to light and thermal energy.
an electric light, lamp, or light bulb is an electrical device that produces light from electricity. it is the most common form of artificial lighting. lamps usually have a base made of ceramic, metal, glass, or plastic that secures them in the socket of a light fixture, which is also commonly referred to as a 'lamp.' the electrical connection to the socket may be made with a screw-thread base, two metal pins, two metal caps or a bayonet mount. the three main categories of electric lights are incandescent lamps, which produce light by a filament heated white-hot by electric current; gas-discharge lamps, which produce light by means of an electric arc through a gas, such as fluorescent lamps; and led lamps, which produce light by a flow of electrons across a band gap in a semiconductor. the energy efficiency of electric lighting has significantly improved since the first demonstrations of arc lamps and incandescent light bulbs in the 19th century. modern electric light sources come in a profusion of types and sizes adapted to many applications. most modern electric lighting is powered by centrally generated electric power, but lighting may also be powered by mobile or standby electric generators or battery systems. battery-powered light is often reserved for when and where stationary lights fail, often in the form of flashlights or electric lanterns, as well as in vehicles. history before electric lighting became common in the early 20th century, people used candles, gas lights, oil lamps, and fires
most circuits have devices such as light bulbs that convert electrical energy to other forms of energy. in the case of a light bulb, electrical energy is converted to light and thermal energy.
What color is a stick bug?
[ "brown", "gray", "green", "black" ]
Key fact: An example of camouflage is when something has the same color as its environment
A
0
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
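The graph-database chunk above says relationships let linked data be retrieved directly, often in one operation. A minimal adjacency-list sketch of that idea in plain Python (the node names, edge labels, and `neighbors` helper are made up for illustration; this is not any real graph-database API):

```python
# Minimal property-graph sketch: nodes with properties, labelled directed edges.
# All names here are hypothetical, chosen only to illustrate one-hop retrieval.
nodes = {
    "alice": {"kind": "person"},
    "bob": {"kind": "person"},
    "acme": {"kind": "company"},
}
# Each edge is (source, label, target); the adjacency index makes
# "follow all edges out of a node" a single dictionary lookup.
edges = [
    ("alice", "KNOWS", "bob"),
    ("alice", "WORKS_AT", "acme"),
    ("bob", "WORKS_AT", "acme"),
]
adjacency = {}
for src, label, dst in edges:
    adjacency.setdefault(src, []).append((label, dst))

def neighbors(node, label=None):
    """Return targets reachable from `node`, optionally filtered by edge label."""
    return [dst for lbl, dst in adjacency.get(node, []) if label in (None, lbl)]

print(neighbors("alice"))              # every node one hop from alice
print(neighbors("alice", "WORKS_AT"))  # restricted to WORKS_AT edges
```

Storing the relationships pre-indexed like this is why the chunk can say querying them is fast: no join is computed at query time, the edge list is simply read off.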
When an object moves, how will its kinetic energy be affected?
[ "lower", "reduced", "escalate", "lessen" ]
Key fact: as an object moves , the kinetic energy of that object will increase
C
2
openbookqa
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
structured query language ( sql ) ( pronounced s - q - l ; or alternatively as " sequel " ) is a domain - specific language used to manage data, especially in a relational database management system ( rdbms ). it is particularly useful in handling structured data, i. e., data incorporating relations among entities and variables. introduced in the 1970s, sql offered two main advantages over older readwrite apis such as isam or vsam. firstly, it introduced the concept of accessing many records with one single command. secondly, it eliminates the need to specify how to reach a record, i. e., with or without an index. originally based upon relational algebra and tuple relational calculus, sql consists of many types of statements, which may be informally classed as sublanguages, commonly : data query language ( dql ), data definition language ( ddl ), data control language ( dcl ), and data manipulation language ( dml ). the scope of sql includes data query, data manipulation ( insert, update, and delete ), data definition ( schema creation and modification ), and data access control. although sql is essentially a declarative language ( 4gl ), it also includes procedural elements. sql was one of the first commercial languages to use edgar f. codd's relational model. the model was described in his influential 1970 paper, " a relational model of data for large shared data banks ". despite not entirely ad
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
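The SQL chunk above classes statements into sublanguages (DDL, DML, DQL, DCL) and notes that one declarative command reaches many records without specifying an access path. A small sketch using Python's built-in sqlite3 module exercises three of the four (the `employee` table and its columns are invented for the example):

```python
import sqlite3

# In-memory database so the example is fully self-contained.
conn = sqlite3.connect(":memory:")

# DDL: schema creation.
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")

# DML: insert and update.
conn.executemany(
    "INSERT INTO employee (name, dept) VALUES (?, ?)",
    [("Ada", "eng"), ("Grace", "eng"), ("Edgar", "research")],
)
conn.execute("UPDATE employee SET dept = 'r&d' WHERE dept = 'research'")

# DQL: one declarative SELECT touches every row; no index or
# record-at-a-time navigation is spelled out, unlike ISAM-style APIs.
rows = conn.execute(
    "SELECT dept, COUNT(*) FROM employee GROUP BY dept ORDER BY dept"
).fetchall()
print(rows)
```

DCL (e.g. `GRANT`/`REVOKE`) is omitted because SQLite has no user accounts; in a server RDBMS it would round out the fourth sublanguage.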
Which is likely considered soft?
[ "taffy", "steel", "diamond", "hard pretzels" ]
Key fact: if a mineral can be scratched by a fingernail then that mineral is soft
A
0
openbookqa
arangodb is a graph database system developed by arangodb inc. arangodb is a multi - model database system since it supports three data models ( graphs, json documents, key / value ) with one database core and a unified query language aql ( arangodb query language ). aql is mainly a declarative language and allows the combination of different data access patterns in a single query. arangodb is a nosql database system but aql is similar in many ways to sql, it uses rocksdb as a storage engine. history arangodb gmbh was founded in 2014 by claudius weinberger and frank celler. they originally called the database system " a versatile object container ", or avoc for short, leading them to call the database avocadodb. later, they changed the name to arangodb. the word " arango " refers to a little - known avocado variety grown in cuba. in january 2017 arangodb raised a seed round investment of 4. 2 million euros led by target partners. in march 2019 arangodb raised 10 million dollars in series a funding led by bow capital. in october 2021 arangodb raised 27. 8 million dollars in series b funding led by iris capital. release history features json : arangodb uses json as a default storage format, but internally it uses arangodb velocypack a fast and compact binary format for serialization and storage. arango
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by
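The relational-database chunk above requires, at a minimum, that data be presented as tables and that relational operators manipulate it in tabular form. A toy sketch of three of Codd's operators (selection, projection, natural join) over plain Python rows; the relations and column names are invented, and a real RDBMS would of course plan and index these operations rather than scan lists:

```python
# Relations as lists of dicts (one dict per row); toy relational operators.
employee = [
    {"id": 1, "name": "Ada", "dept_id": 10},
    {"id": 2, "name": "Edgar", "dept_id": 20},
]
dept = [
    {"dept_id": 10, "dept_name": "eng"},
    {"dept_id": 20, "dept_name": "research"},
]

def select(rel, pred):
    """Selection: keep rows satisfying a predicate."""
    return [row for row in rel if pred(row)]

def project(rel, cols):
    """Projection: keep only the named columns."""
    return [{c: row[c] for c in cols} for row in rel]

def natural_join(r, s):
    """Natural join: combine rows that agree on all shared column names."""
    common = set(r[0]) & set(s[0])
    return [{**a, **b} for a in r for b in s
            if all(a[c] == b[c] for c in common)]

joined = natural_join(employee, dept)
result = project(select(joined, lambda row: row["dept_name"] == "eng"), ["name"])
print(result)
```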
To warm yourself up on a chilly day
[ "rub your palms together", "wear short sleeve shirts", "go without any socks", "stand still in one place" ]
Key fact: friction causes the temperature of an object to increase
A
0
openbookqa
= = references = =
dermatoglyphics ( from ancient greek derma, " skin ", and glyph, " carving " ) is the scientific study of fingerprints, lines, mounts and shapes of hands, as distinct from the superficially similar pseudoscience of palmistry. dermatoglyphics also refers to the making of naturally occurring ridges on certain body parts, namely palms, fingers, soles, and toes. these are areas where hair usually does not grow, and these ridges allow for increased leverage when picking up objects or walking barefoot. in a 2009 report, the scientific basis underlying dermatoglyphics was questioned by the national academy of sciences, for the discipline's reliance on subjective comparisons instead of conclusions drawn from the scientific method. history 1823 marks the beginning of the scientific study of papillary ridges of the hands and feet, with the work of jan evangelista purkyn. by 1858, sir william herschel, 2nd baronet, while in india, became the first european to realize the value of fingerprints for identification. sir francis galton conducted extensive research on the importance of skin - ridge patterns, demonstrating their permanence and advancing the science of fingerprint identification with his 1892 book fingerprints. in 1893, sir edward henry published the book the classification and uses of fingerprints, which marked the beginning of the modern era of fingerprint identification and is the basis for other classification systems. in 1929, harold cummins and charles midlo m. d.,
clothing physiology is a branch of science that studies the interaction between clothing and the human body, with a particular focus on how clothing affects the physiological and psychological responses of individuals to different environmental conditions. the goal of clothing physiology research is to develop a better understanding of how clothing can be designed to optimize comfort, performance, and protection for individuals in various settings, including outdoor recreation, occupational environments, and medical contexts. purpose of clothing human clothing motives are frequently oversimplified in cultural and sociological theories, with the assumption that they are solely motivated by modesty, adornment, protection, or sex. however, clothing is primarily motivated by the environment, with its form being influenced by human characteristics and traits, as well as physical and social factors such as sex relations, costume, caste, class, and religion. ultimately, clothing must be comfortable in various environmental conditions to support physiological behavior. the concept of clothing has been aptly characterized as a quasi - physiological system that interacts with the human body. quasi - physiological systems clothing can be considered as a quasi - physiological system that interacts with the body in different ways, just like the distinct physiological systems of the human body, such as digestive system and nervous system, which can be analyzed systematically. purpose of clothing physiology the acceptance and perceived comfort of a garment cannot be attributed solely to its thermal properties. rather, the sensation of comfort when wearing a garment is associated with various factors, including the fit of the garment, its moisture buffering
A simple pulley example could be
[ "Going running on a treadmill", "Swimming lap in a pool", "Riding a bike outside", "pulling water from a well" ]
Key fact: a pulley is used to lift a flag on a flagpole
D
3
openbookqa
a treadmill is a device generally used for walking, running, or climbing while staying in the same place. treadmills were introduced before the development of powered machines to harness the power of animals or humans to do work, often a type of mill operated by a person or animal treading the steps of a treadwheel to grind grain. in later times, treadmills were used as punishment devices for people sentenced to hard labour in prisons. the terms treadmill and treadwheel were used interchangeably for the power and punishment mechanisms. more recently, treadmills have instead been used as exercise machines for running or walking in one place. rather than the user powering a mill, the device provides a moving platform with a wide conveyor belt driven by an electric motor or a flywheel. the belt moves to the rear, requiring the user to walk or run at a speed matching the belt. the rate at which the belt moves is the rate of walking or running. thus, the speed of running may be controlled and measured. the more expensive, heavy - duty versions are motor - driven ( usually by an electric motor ). the simpler, lighter, and less expensive versions passively resist the motion, moving only when walkers push the belt with their feet. the latter are known as manual treadmills. treadmills continue to be the biggest - selling exercise equipment category by a large margin. as a result, the treadmill industry has hundreds of manufacturers throughout the world. history william st
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
swimming : this is a single 200 meter freestyle swim. a time of 2 minutes 30 seconds scores 250 points, with faster times scoring more and slower times less. riding : athletes attempt a show - jumping course with 12 obstacles.
What is something that could be made from iron?
[ "Cats", "RVs", "Wood", "Feelings" ]
Key fact: iron nails are made of iron
B
1
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
a cost database is a computerized database of cost estimating information, which is normally used with construction estimating software to support the formation of cost estimates. a cost database may also simply be an electronic reference of cost data. overview a cost database includes the electronic equivalent of a cost book, or cost reference book, a tool used by estimators for many years. cost books may be internal records at a particular company or agency, or they may be commercially published books on the open market. aec teams and federal agencies can and often do collect internally sourced data from their own specialists, vendors, and partners. this is valuable personalized cost data that is captured but often doesn't cover the same range that commercial cost book data can. internally sourced data is difficult to maintain and do not have the same level of developed user interface or functionalities as a commercial product. the cost database may be stored in relational database management system, which may be in either an open or proprietary format, serving the data to the cost estimating software. the cost database may be hosted in the cloud. estimators use a cost database to store data in structured way which is easy to manage and retrieve. details costing data the most basic element of a cost estimate and therefore the cost database is the estimate line item or work item. an example is " concrete, 4000 psi ( 30 mpa ), " which is the description of the item. in the cost database, an item is a row or record in
an uncertain database is a kind of database studied in database theory. the goal of uncertain databases is to manage information on which there is some uncertainty. uncertain databases make it possible to explicitly represent and manage uncertainty on the data, usually in a succinct way. formal definition at the basis of uncertain databases is the notion of possible world. specifically, a possible world of an uncertain database is a ( certain ) database which is one of the possible realizations of the uncertain database. a given uncertain database typically has more than one, and potentially infinitely many, possible worlds. a formalism to represent uncertain databases then explains how to succinctly represent a set of possible worlds into one uncertain database. types of uncertain databases uncertain database models differ in how they represent and quantify these possible worlds : incomplete databases are a compact representation of the set of possible worlds the use of null in sql, arguably the most commonplace instantiation of uncertain databases, is an example of incomplete database model. probabilistic databases are a compact representation of a probability distribution over the set of possible worlds. fuzzy databases are a compact representation of a fuzzy set of the possible worlds. though mostly studied in the relational setting, uncertain database models can also be defined in other relational models such as graph databases or xml databases. incomplete database the most common database model is the relational model. multiple incomplete database models have been defined over the relational model, that form extensions to the relational algebra. these have been called imieliskilipski
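The uncertain-database chunk above defines an incomplete database as a compact representation of its possible worlds, with SQL NULL as the commonplace instance. A toy enumeration in Python, where `None` plays the role of NULL and the unknown cell ranges over a small, hypothetical finite domain (real incomplete-database models allow infinite domains and reason symbolically instead of enumerating):

```python
from itertools import product

# One incomplete relation: None marks a value that exists but is unknown.
incomplete = [
    {"name": "sensor_a", "reading": 7},
    {"name": "sensor_b", "reading": None},
]
DOMAIN = [0, 1, 2]  # hypothetical finite domain for unknown readings

def possible_worlds(rel, domain):
    """Enumerate every certain database the incomplete one could stand for."""
    unknown_slots = [(i, k) for i, row in enumerate(rel)
                     for k, v in row.items() if v is None]
    for values in product(domain, repeat=len(unknown_slots)):
        world = [dict(row) for row in rel]  # copy, then fill the gaps
        for (i, k), v in zip(unknown_slots, values):
            world[i][k] = v
        yield world

worlds = list(possible_worlds(incomplete, DOMAIN))
print(len(worlds))  # one world per choice of the unknown reading
# An answer is a *certain* answer if it holds in every possible world:
print(all(w[0]["reading"] == 7 for w in worlds))
```

This also shows why querying uncertain data is subtle: a query's answer is only certain when it is true in all worlds, not merely in some completion.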
The more hawks that chow down on voles, the
[ "the happier voles in an area", "the bigger voles in the area", "fewer voles in that area", "more voles in the area" ]
Key fact: if a new predator begins eating prey then the population of that prey will decrease
C
2
openbookqa
a vector database, vector store or vector search engine is a database that uses the vector space model to store vectors ( fixed - length lists of numbers ) along with other data items. vector databases typically implement one or more approximate nearest neighbor algorithms, so that one can search the database with a query vector to retrieve the closest matching database records. vectors are mathematical representations of data in a high - dimensional space. in this space, each dimension corresponds to a feature of the data, with the number of dimensions ranging from a few hundred to tens of thousands, depending on the complexity of the data being represented. a vector's position in this space represents its characteristics. words, phrases, or entire documents, as well as images, audio, and other types of data, can all be vectorized. these feature vectors may be computed from the raw data using machine learning methods such as feature extraction algorithms, word embeddings or deep learning networks. the goal is that semantically similar data items receive feature vectors close to each other. vector databases can be used for similarity search, semantic search, multi - modal search, recommendations engines, large language models ( llms ), object detection, etc. vector databases are also often used to implement retrieval - augmented generation ( rag ), a method to improve domain - specific responses of large language models. the retrieval component of a rag can be any search system, but is most often implemented as a vector database. text documents describing the domain of interest are collected,
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
the context of count data.
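The vector-database chunk above says a query vector retrieves the closest matching records, with semantically similar items given nearby embeddings. A brute-force exact nearest-neighbour sketch using cosine similarity (the document ids and three-dimensional vectors are made up; production systems use hundreds of dimensions and approximate indexes rather than this O(n) scan):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy "vector store": record id -> embedding.
store = {
    "doc_cat":  [0.9, 0.1, 0.0],
    "doc_dog":  [0.8, 0.2, 0.1],
    "doc_bond": [0.0, 0.1, 0.9],
}

def search(query, k=2):
    """Exact k-nearest-neighbour search: rank every record by similarity."""
    ranked = sorted(store, key=lambda doc: cosine(query, store[doc]), reverse=True)
    return ranked[:k]

print(search([1.0, 0.0, 0.0]))  # records most similar to the query vector
```

The chunk's mention of approximate nearest-neighbour algorithms is exactly what replaces this full scan at scale, trading a little recall for sublinear query time.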
Snowfall takes place during the winter in what area?
[ "Arctic", "Atlantic", "Rain forest", "Tropics" ]
Key fact: snow falls during the winter in the arctic environment
A
0
openbookqa
integrated surface database ( isd ) is a global database compiled by the national oceanic and atmospheric administration ( noaa ) and the national centers for environmental information ( ncei ) comprising hourly and synoptic surface observations compiled globally from ~ 35, 500 weather stations ; it is updated, automatically, hourly. the data largely date back to paper records which were keyed in by hand from the '60s and '70s ( and in some cases, weather observations from over one hundred years ago ). it was developed by the joint federal climate complex project in asheville, north carolina. = = references = =
over the last two centuries many environmental chemical observations have been made from a variety of ground - based, airborne, and orbital platforms and deposited in databases. many of these databases are publicly available. all of the instruments mentioned in this article give online public access to their data. these observations are critical in developing our understanding of the earth's atmosphere and issues such as climate change, ozone depletion and air quality. some of the external links provide repositories of many of these datasets in one place. for example, the cambridge atmospheric chemical database, is a large database in a uniform ascii format. each observation is augmented with the meteorological conditions such as the temperature, potential temperature, geopotential height, and equivalent pv latitude. ground - based and balloon observations ndsc observations. the network for the detection for stratospheric change ( ndsc ) is a set of high - quality remote - sounding research stations for observing and understanding the physical and chemical state of the stratosphere. ozone and key ozone - related chemical compounds and parameters are targeted for measurement. the ndsc is a major component of the international upper atmosphere research effort and has been endorsed by national and international scientific agencies, including the international ozone commission, the united nations environment programme ( unep ), and the world meteorological organization ( wmo ). the primary instruments and measurements are : ozone lidar ( vertical profiles of ozone from the tropopause to at least 40 km altitude
a taxonomic database is a database created to hold information on biological taxa, for example groups of organisms organized by species name or other taxonomic identifier, for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in the breadth of the groups of taxa and the geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
Some birds find locations with
[ "landmarks", "road signs", "eggs", "magnetic patterns" ]
Key fact: Earth 's magnetic patterns are used for finding locations by animals that migrate
D
3
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as nosql databases. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
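The core idea above, relationships stored alongside the nodes so that following one is a direct lookup rather than a join, can be sketched in a few lines. This is an illustrative in-memory model, not any particular product's API; all names are invented.

```python
# Minimal sketch of the graph-database model: nodes with properties, and
# labelled, directed edges that are first-class and carry properties too.
class Graph:
    def __init__(self):
        self.nodes = {}        # node_id -> properties
        self.out_edges = {}    # node_id -> [(label, target_id, properties)]

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props
        self.out_edges.setdefault(node_id, [])

    def add_edge(self, src, label, dst, **props):
        # the edge is stored with its source node: traversal needs no join
        self.out_edges[src].append((label, dst, props))

    def neighbours(self, node_id, label=None):
        # "retrieved with one operation": a direct scan of stored edges
        return [dst for (lbl, dst, _p) in self.out_edges[node_id]
                if label is None or lbl == label]

g = Graph()
g.add_node("alice", kind="person")
g.add_node("acme", kind="company")
g.add_edge("alice", "works_at", "acme", since=2020)
print(g.neighbours("alice", "works_at"))  # ['acme']
```

A relational store would express the same query as a join over an edge table; here the relationship is reached directly from the node.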
the web - based map collection includes :
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
Single cell organisms can put an animal in the
[ "funny farm", "space program", "emergency room", "hall of fame" ]
Key fact: bacteria can cause people to become ill
C
2
openbookqa
the hospital records database is a database provided by the wellcome trust and uk national archives which provides information on the existence and location of the records of uk hospitals. this includes the location and dates of administrative and clinical records, the existence of catalogues, and links to some online hospital catalogues. the website was proposed as a resource of the month by the royal society of medicine in 2009 references external links hospital records database smart clinics
mumps ( " massachusetts general hospital utility multi - programming system " ), or m, is an imperative, high - level programming language with an integrated transaction processing keyvalue database. it was originally developed at massachusetts general hospital for managing patient medical records and hospital laboratory information systems. mumps technology has since expanded as the predominant database for health information systems and electronic health records in the united states. mumps - based information systems, such as epic systems ', provide health information services for over 78 % of patients across the u. s. a unique feature of the mumps technology is its integrated database language, allowing direct, high - speed read - write access to permanent disk storage. history 1960s - 1970s - genesis mumps was developed by neil pappalardo, robert a. greenes, and curt marble in dr. octo barnett's lab at the massachusetts general hospital ( mgh ) in boston during 1966 and 1967. it grew out of frustration, during a national institutes of health ( nih ) supported hospital information systems project at the mgh, with the development in assembly language on a time - shared pdp - 1 by primary contractor bolt, beranek & newman ( bbn ). mumps came out of an internal " skunkworks " project at mgh by pappalardo, greenes, and marble to create an alternative development environment. as a result of initial demonstration of capabilities, dr. barnett's proposal to nih in 1967 for renewal
If a bird is moving through the sky, someone wanting to know the speed would
[ "speedily enjoy viewing", "observe quickness", "watch slowly", "look away" ]
Key fact: speed is a measure of how fast an object is moving
B
1
openbookqa
in a database, a view is the result set of a stored query that presents a limited perspective of the database to a user. this pre - established query command is kept in the data dictionary. unlike ordinary base tables in a relational database, a view does not form part of the physical schema : as a result set, it is a virtual table computed or collated dynamically from data in the database when access to that view is requested. changes applied to the data in a relevant underlying table are reflected in the data shown in subsequent invocations of the view. views can provide advantages over tables : views can represent a subset of the data contained in a table. consequently, a view can limit the degree of exposure of the underlying tables to the outer world : a given user may have permission to query the view, while denied access to the rest of the base table. views can join and simplify multiple tables into a single virtual table. views can act as aggregated tables, where the database engine aggregates data ( sum, average, etc. ) and presents the calculated results as part of the data. views can hide the complexity of data. for example, a view could appear as sales2020 or sales2021, transparently partitioning the actual underlying table. views take very little space to store ; the database contains only the definition of a view, not a copy of all the data that it presents. views structure data in a way that classes of users find natural and intuitive.
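The behaviour described above, a view storing only its defining query and reflecting later changes to the base table, can be demonstrated with the standard-library sqlite3 module. The table, view, and column names below are invented for the example.

```python
# Sketch of a view as a stored query: only the definition is kept, and
# each invocation recomputes the result from the current base table.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (year INTEGER, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [(2020, 10.0), (2020, 5.0), (2021, 7.5)])

# the view "transparently partitions" the underlying table, as in the text
con.execute("CREATE VIEW sales2020 AS SELECT * FROM sales WHERE year = 2020")

# a later change to the base table shows up in the next query of the view
con.execute("INSERT INTO sales VALUES (2020, 2.5)")
rows = con.execute("SELECT SUM(amount) FROM sales2020").fetchone()
print(rows[0])  # 17.5
```

Note the view costs almost no storage: the database holds the SELECT text, not a copy of the rows.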
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
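The two options for handling sparseness mentioned above can be made concrete with a toy measurement series; the data values here are invented. Option (1) keeps the dense layout (and would compress it); option (2) stores only the non-null entries.

```python
# Illustrative 70%-sparse measurement series, in the 40-50%+ range the
# text describes as common for statistical databases.
measurements = [None, 3.2, None, None, 1.1, None, None, None, 4.0, None]

# option (2): remove the null entries, keeping (index, value) pairs
sparse = {i: v for i, v in enumerate(measurements) if v is not None}

density = len(sparse) / len(measurements)
print(sparse)    # {1: 3.2, 4: 1.1, 8: 4.0}
print(density)   # 0.3
```

The trade-off is the usual one: the dense form supports positional access and compresses well; the keyed form pays per-entry overhead but never stores a null.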
Most of the pollutants in the air and in our world are there due to
[ "wolves attacking deer", "dolphins under water", "man made reasons", "naturally occurring reasons" ]
Key fact: humans cause pollution
C
2
openbookqa
observational learning explains how wolves know how to hunt as a group.
the ecology of fear is a conceptual framework describing the psychological impact that predator - induced stress experienced by animals has on populations and ecosystems. within ecology, the impact of predators has been traditionally viewed as limited to the animals that they directly kill, while the ecology of fear advances evidence that predators may have a far more substantial impact on the individuals that they predate, reducing fecundity, survival and population sizes. to avoid being killed, animals that are preyed upon will employ anti - predator defenses which aid survival but may carry substantial costs. history the concept was coined in the 1999 paper " the ecology of fear : optimal foraging, game theory, and trophic interactions ", which argued that " a predator [... ] depletes a food patch [... ] by frightening prey rather than by actually killing prey. " in the 2000s, the ecology of fear gained attention after researchers identified an impact of the reintroduction of wolves into yellowstone on the regrowth of aspen and willows because of a substantial reduction in the numbers of elk in the park through killing. some studies also indicated that the wolves affected the grazing intensity and patterns of the elk because they felt less secure when feeding. critics have put forward alternative explanations for the regrowth, other than the wolf reintroduction. the consideration of wolves as a charismatic species and the fame of yellowstone led to widespread media attention of the concept, including a mention in the new york times and a fold - out illustration of
to analyze the population fluctuations of a species like wolves over time, we can use time series analysis, which involves examining a sequence of data points collected at regular intervals. this method can help us identify patterns, trends, and correlations between different factors affecting the population. here's a step - by - step guide on how to perform a time series analysis : 1. data collection : gather historical data on the wolf population, including the number of individuals at different time intervals ( e. g., monthly or yearly ). also, collect data on factors that may influence the population, such as prey availability, climate change indicators ( e. g., temperature, precipitation ), and hunting regulations. 2. data preprocessing : clean and preprocess the data to ensure it is accurate and suitable for analysis. this may involve removing outliers, filling in missing values, and converting data to a common format. 3. exploratory data analysis : visualize the data to gain insights into the population trends and potential relationships between variables. plot the wolf population over time and look for patterns, such as seasonality or long - term trends. also, create scatter plots or correlation matrices to examine the relationships between the wolf population and the factors of interest. 4. model selection : choose an appropriate time series model to analyze the data. some common models include autoregressive integrated moving average ( arima ), seasonal decomposition of time series ( stl ),
Which one of these would be a migratory outcome?
[ "animals feeling more comfortable being outside", "animals getting more sick", "animals getting more aggressive", "animals dying from sub-zero weather" ]
Key fact: migration is when animals move themselves from a cooler climate to a warmer climate for the winter
A
0
openbookqa
animals may eat and drink at certain times of day as well. humans have daily cycles of behavior, too. most people start to get sleepy after dark and have a hard time sleeping when it is light outside. daily cycles of behavior are called circadian rhythms.
aggression in cattle is usually a result of fear, learning, and hormonal state ; however, many other factors can contribute to aggressive behaviors in cattle. despite the fact that bulls ( uncastrated male cattle ) are generally significantly more aggressive than cows, there are far more reported cases of cows attacking humans than bulls, and the majority of farm - related injuries and fatalities caused by cattle are caused by cows. this is most likely because there are far more female cattle on a farm than bulls, so statistically an injury or death from cattle is more likely to be caused by a cow. this is also exacerbated by the fact that many people are unaware of the potential for aggression in cows, especially during, and immediately after, calving ( giving birth ) and when cows feel threatened or are seeking to protect their young. temperament traits temperament traits are traits that explain the behavior and actions of an animal and can be described as the traits responsible for how easily an animal can be approached, handled, milked, or trained. temperament can also be defined as how an animal carries out maternal or other behaviors while subjected to routine management. these traits can change as the animal ages or as the environment in which the animal lives changes over time ; however, it has been shown that regardless of age and environmental conditions, some individuals remain more aggressive than others. aggression in cattle can arise from both genetic and environmental factors. aggression between cows is worse
identification and monitoring of animal reservoirs can play a crucial role in predicting and controlling viral disease outbreaks in humans. animal reservoirs are species that can harbor infectious agents, such as viruses, without showing signs of illness. these reservoirs can transmit the virus to other species, including humans, leading to disease outbreaks. by understanding and monitoring these animal reservoirs, we can take several steps to prevent and control viral outbreaks : 1. early warning systems : identifying and monitoring animal reservoirs can help establish early warning systems for potential viral outbreaks. by tracking changes in animal populations and their health, we can detect unusual patterns that may indicate the emergence of a new virus or the re - emergence of a known virus. this information can then be used to alert public health officials and initiate preventive measures. 2. preventing spillover events : a spillover event occurs when a virus jumps from an animal reservoir to humans. by understanding the ecology and behavior of animal reservoirs, we can identify potential spillover risks and implement strategies to reduce human - animal contact. this may include measures such as regulating wildlife trade, improving biosecurity in farms, and promoting safe food handling practices. 3. vaccine development : identifying the animal reservoirs of a virus can help guide the development of vaccines. by studying the virus in its natural host, researchers can gain insights into its biology, transmission, and evolution, which can inform the design of effective vaccines for humans. 4. surveillance and control programs : monitoring animal reservoirs can help inform targeted surveillance and
A 40 foot wide hole is found in the desert. It is shaped as if a ball had hit it. What could have happened?
[ "something from space entered the atmosphere", "a lizard built the hole", "the moon's gravity pulled sand out of the hole", "a dust storm filled it in" ]
Key fact: usually craters on planets are formed by asteroids impacting that planet or moon 's surface
A
0
openbookqa
the moon has a crust, mantle, and core.
meteorites provide clues about our solar system. many were formed in the early solar system ( figure below ). some are from asteroids that have split apart. a few are rocks from nearby bodies like mars. for this to happen, an asteroid smashed into mars and sent up debris. a bit of the debris entered earth ’ s atmosphere as a meteor.
an atmosphere is the gases that surround a planet. the early earth had no atmosphere. conditions were so hot that gases were not stable.
A person is lost in a dense forest, and needs to find their home. They know their home is to the south, and they are headed north. They can find home by using a
[ "northern-directing device", "northern light reader", "northeastern winds", "north central credit" ]
Key fact: a compass is a kind of tool for determining direction by pointing north
A
0
openbookqa
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by
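The minimum requirements named above, data presented as tables of rows and columns, plus relational operators to manipulate them, can be shown with the standard-library sqlite3 module. The schema and values are invented for the example.

```python
# Sketch of the relational minimum: two tables and a join, the classic
# relational operator, producing its result as another table of rows.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dept (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE emp (name TEXT, dept_id INTEGER)")
con.execute("INSERT INTO dept VALUES (1, 'research')")
con.executemany("INSERT INTO emp VALUES (?, ?)",
                [("codd", 1), ("boyce", 1)])

# SQL expresses the join declaratively; the result is itself tabular
rows = con.execute(
    "SELECT emp.name, dept.name FROM emp "
    "JOIN dept ON emp.dept_id = dept.id ORDER BY emp.name"
).fetchall()
print(rows)  # [('boyce', 'research'), ('codd', 'research')]
```

This closure property, operators that take tables and return tables, is what lets relational queries compose.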
If a tree in your backyard rots away or is knocked over by the wind, you can just
[ "plant another", "Draw one", "climb one", "do nothing" ]
Key fact: a tree can be replaced by planting a new tree
A
0
openbookqa
infinitegraph is a distributed graph database implemented in java and c + + and is from a class of nosql ( " not only sql " ) database technologies that focus on graph data structures. developers use infinitegraph to find useful and often hidden relationships in highly connected, complex big data sets. infinitegraph is cross - platform, scalable, cloud - enabled, and is designed to handle very high throughput. infinitegraph can efficiently perform queries that are otherwise difficult, such as finding all paths or the shortest path between two items. infinitegraph is suited for applications and services that solve graph problems in operational environments. infinitegraph's " do " query language enables both value - based queries and complex graph queries. infinitegraph goes beyond graph databases to also support complex object queries. adoption is seen in federal government, telecommunications, healthcare, cybersecurity, manufacturing, finance, and networking applications. history infinitegraph is produced and supported by objectivity, inc., a company that develops database management technologies for large - scale, distributed data management and relationship analytics. the new infinitegraph was released in may 2021. features api / protocols : java, core c + +, rest api graph model : labeled directed multigraph. an edge is a first - class entity with an identity independent of the vertices it connects. concurrency : update locking on subgraphs, concurrent non - blocking ingest. consistency : flexible ( from acid to relaxed ). distribution : lock server and 64
When a lake receives too much rain it will
[ "swell beyond it's banks", "reverse direction of flow", "lower it's water level", "dry up all together" ]
Key fact: when a body of water receives more water than it can hold , a flood occurs
A
0
openbookqa
the atmosphere is an exchange pool for water. ice masses, aquifers, and the deep ocean are water reservoirs.
streamflow, or channel runoff, is the flow of water in streams and other channels, and is a major element of the water cycle. it is one runoff component, the movement of water from the land to waterbodies, the other component being surface runoff. water flowing in channels comes from surface runoff from adjacent hillslopes, from groundwater flow out of the ground, and from water discharged from pipes. the discharge of water flowing in a channel is measured using stream gauges or can be estimated by the manning equation. the record of flow over time is called a hydrograph. flooding occurs when the volume of water exceeds the capacity of the channel. role in the water cycle streams play a critical role in the hydrologic cycle that is essential for all life on earth. a diversity of biological species, from unicellular organisms to vertebrates, depend on flowing - water systems for their habitat and food resources. rivers are major aquatic landscapes for all manners of plants and animals. rivers even help keep the aquifers underground full of water by discharging water downward through their streambeds. in addition to that, the oceans stay full of water because rivers and runoff continually refreshes them. streamflow is the main mechanism by which water moves from the land to the oceans or to basins of interior drainage. sources stream discharge is derived from four sources : channel precipitation, overland flow, interflow, and groundwater. channel precipitation is the moisture falling directly on the water surface, and
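The Manning equation mentioned above estimates channel discharge from the flow geometry as Q = (1/n) · A · R^(2/3) · S^(1/2) (SI units, n the roughness coefficient, A the cross-sectional area, R the hydraulic radius, S the channel slope). A small sketch, with illustrative parameter values:

```python
# Manning's equation for open-channel discharge, SI form.
def manning_discharge(n, area, hydraulic_radius, slope):
    """Return discharge Q in m^3/s.

    n: Manning roughness coefficient (dimensionless)
    area: cross-sectional flow area (m^2)
    hydraulic_radius: area / wetted perimeter (m)
    slope: channel slope (m/m)
    """
    return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

# illustrative values: a modest natural channel
q = manning_discharge(n=0.035, area=12.0, hydraulic_radius=1.5, slope=0.002)
print(round(q, 1))
```

In practice n is read from published tables for the channel material, and the estimate is checked against stream-gauge records where available.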
water level, also known as gauge height or stage, is the elevation of the free surface of a sea, stream, lake or reservoir relative to a specified vertical datum. over long distances, neglecting external forcings ( such as wind ), water level tends to conform to an equigeopotential surface. see also water level ( device ), device utilizing the surface of liquid water to establish a local horizontal plane of reference flood stage hydraulic head stream gauge water level gauges tide gauge level sensor liquid level reference water level stage ( hydrology ) sea level
Adding heat energy to something can cook it, such as heating
[ "ice", "wood", "seashells", "cookie dough" ]
Key fact: cooking food requires adding heat energy
D
3
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as nosql databases. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
arangodb is a graph database system developed by arangodb inc. arangodb is a multi - model database system since it supports three data models ( graphs, json documents, key / value ) with one database core and a unified query language aql ( arangodb query language ). aql is mainly a declarative language and allows the combination of different data access patterns in a single query. arangodb is a nosql database system, but aql is similar in many ways to sql. it uses rocksdb as its storage engine. history arangodb gmbh was founded in 2014 by claudius weinberger and frank celler. they originally called the database system a " versatile object container ", or avoc for short, leading them to call the database avocadodb. later, they changed the name to arangodb. the word " arango " refers to a little - known avocado variety grown in cuba. in january 2017 arangodb raised a seed round investment of 4.2 million euros led by target partners. in march 2019 arangodb raised 10 million dollars in series a funding led by bow capital. in october 2021 arangodb raised 27.8 million dollars in series b funding led by iris capital. release history features json : arangodb uses json as a default storage format, but internally it uses arangodb velocypack, a fast and compact binary format for serialization and storage. arango
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by
Which feels hotter?
[ "90 degrees Fahrenheit, high humidity", "low temperature, low humidity", "low temperature, high humidity", "90 degrees Fahrenheit, low humidity" ]
Key fact: humidity is the amount of water vapor in the air
A
0
openbookqa
lower thresholds are sometimes appropriate for elderly people. the normal daily temperature variation is typically 0.5 °c ( 0.9 °f ), but can be greater among people recovering from a fever. an organism at optimum temperature is considered afebrile, meaning " without fever ". if temperature is raised, but the setpoint is not raised, then the result is hyperthermia.
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ) system, rather than an oltp ( online transaction processing ) system. modern decision support and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
in terms of moisture, climates can be classified as arid ( dry ), semi - arid, humid ( wet ), or semi - humid. the amount of moisture depends on both precipitation and evaporation.
A supermodel with a big nose probably
[ "ate an animal with a big nose", "had a parent also with one", "didn't pay the photographer enough", "ate dog food a lot" ]
Key fact: when both a dominant and recessive gene are present , the dominant trait will be visible
B
1
openbookqa
eating ( also known as consuming ) is the ingestion of food. in biology, this is typically done to provide a heterotrophic organism with energy and nutrients and to allow for growth. animals and other heterotrophs must eat in order to survive : carnivores eat other animals, herbivores eat plants, omnivores consume a mixture of both plant and animal matter, and detritivores eat detritus. fungi digest organic matter outside their bodies as opposed to animals that digest their food inside their bodies. for humans, eating is more complex, but is typically an activity of daily living. physicians and dieticians consider a healthful diet essential for maintaining peak physical condition. some individuals may limit their amount of nutritional intake. this may be a result of a lifestyle choice : as part of a diet or as religious fasting. limited consumption may be due to hunger or famine. overconsumption of calories may lead to obesity and the reasons behind it are myriad, however, its prevalence has led some to declare an " obesity epidemic ". eating practices among humans many homes have a large kitchen area devoted to preparation of meals and food, and may have a dining room, dining hall, or another designated area for eating. most societies also have restaurants, food courts, and food vendors so that people may eat when away from home, when lacking time to prepare food, or as a social occasion. at their highest level of sophistication,
john ate some of the cookies. +> john didn't eat all of the cookies. here, the use of " some " semantically entails that more than one cookie was eaten. it does not entail, but implicates, that not every cookie was eaten, or at least that the speaker does not know whether any cookies are left. the reason for this implicature is that saying " some " when one could say " all " would be less than informative enough in most circumstances.
number sense in animals is the ability of creatures to represent and discriminate quantities of relative sizes by number sense. it has been observed in various species, from fish to primates. animals are believed to have an approximate number system, the same system for number representation demonstrated by humans, which is more precise for smaller quantities and less so for larger values. an exact representation of numbers higher than three has not been attested in wild animals, but can be demonstrated after a period of training in captive animals. in order to distinguish number sense in animals from the symbolic and verbal number system in humans, researchers use the term numerosity, rather than number, to refer to the concept that supports approximate estimation but does not support an exact representation of number quality. number sense in animals includes the recognition and comparison of number quantities. some numerical operations, such as addition, have been demonstrated in many species, including rats and great apes. representing fractions and fraction addition has been observed in chimpanzees. a wide range of species with an approximate number system suggests an early evolutionary origin of this mechanism or multiple convergent evolution events. like humans, chicks have a left - to - right mental number line ( they associate the left space with smaller numbers and the right space with larger numbers ). early studies at the beginning of the 20th century, wilhelm von osten famously, but prematurely, claimed human - like counting abilities in animals on the example of his horse named hans. his claim is widely rejected today,
Who would refrain from eating a salad?
[ "horses", "cows", "wolves", "rabbits" ]
Key fact: carnivores only eat animals
C
2
openbookqa
bovine metabolome database is a free web database of metabolite information for bovine ( cow ) species. it contains 7,859 metabolites in total. each metabolite entry has properties such as cas name, iupac name, structure diagram, formula, and biofluid location. it fills a gap in information in the bovine field. this project is supported by genome alberta & genome canada, a not - for - profit organization that is leading canada's national genomics strategy with $ 600 million in funding from the federal government. the bovine metabolome database's protocol is available via the bovine metabolome database website. see also hmdb drugbank
the animal genome size database is a catalogue of published genome size estimates for vertebrate and invertebrate animals. it was created in 2001 by dr. t. ryan gregory of the university of guelph in canada. as of september 2005, the database contains data for over 4,000 species of animals. a similar database, the plant dna c - values database ( c - value being analogous to genome size in diploid organisms ) was created by researchers at the royal botanic gardens, kew, in 1997. see also list of organisms by chromosome count references external links animal genome size database plant dna c - values database fungal genome size database cell size database
Shorter periods of daylight happen
[ "October to December", "November to March", "December to April", "December to March" ]
Key fact: the amount of daylight is least in the winter
D
3
openbookqa
a calendar queue ( cq ) is a priority queue ( queue in which every element has an associated priority and the dequeue operation removes the highest priority element ). it is analogous to a desk calendar, which is used by humans for ordering future events by date. discrete event simulations require a future event list ( fel ) structure that sorts pending events according to their time. such simulators require a good and efficient data structure, as time spent on queue management can be significant. the calendar queue ( with optimum bucket size ) can approach o ( 1 ) average performance. calendar queues are closely related to bucket queues but differ from them in how they are searched and in being dynamically resized. implementation theoretically, like a bucket queue, a calendar queue consists of an array of linked lists. sometimes each index in the array is also referred to as a bucket. each bucket has a specified width and its linked list holds events whose timestamps map to that bucket. a desk calendar has 365 buckets for each day with a width of one day. each array element contains one pointer that is the head of the corresponding linked list. if the array name is " month ", then month [ 11 ] is a pointer to the list of events scheduled for the 12th month of the year ( the vector index starts from 0 ). the complete calendar thus consists of an array of 12 pointers and a collection of up to 12 linked lists. in calendar queue, enqueue ( addition in a queue ) and
this dial shows the exact date. the numbering goes to 31, the maximum number of days in a month. in months that have fewer days ( 28, 29 or 30 ), the hand automatically moves forward to the first day of the following month. the months with 31 days are january, march, may, july, august, october and december, the months with 30 days are april, june, september and november. february is the only month with less than 30 days. february has only 28 days ( 29 days in leap years ).
If a snow storm is coming you should?
[ "Go out", "Find pizza", "Buy supplies", "Watch Tv" ]
Key fact: preparing for a storm requires predicting the occurrence of that storm
C
2
openbookqa
consider a database that records customer orders, where an order is for one or more of the items that the enterprise sells. the database would contain a table identifying customers by a customer number ( primary key ) ; another identifying the products that can be sold by a product number ( primary key ) ; and it would contain a pair of tables describing orders. one of the tables could be called orders and it would have an order number ( primary key ) to identify this order uniquely, and would contain a customer number ( foreign key ) to identify who the products are being sold to, plus other information such as the date and time when the order was placed, how it will be paid for, where it is to be shipped to, and so on. the other table could be called orderitem ; it would be identified by a compound key consisting of both the order number ( foreign key ) and an item line number ; with other non - primary key attributes such as the product number ( foreign key ) that was ordered, the quantity, the price, any discount, any special options, and so on.
Which is an inherited characteristic?
[ "hair length", "clothing style", "bone thickness", "language skills" ]
Key fact: the thickness of the parts of an organism is an inherited characteristic
C
2
openbookqa
a vector database, vector store or vector search engine is a database that uses the vector space model to store vectors ( fixed - length lists of numbers ) along with other data items. vector databases typically implement one or more approximate nearest neighbor algorithms, so that one can search the database with a query vector to retrieve the closest matching database records. vectors are mathematical representations of data in a high - dimensional space. in this space, each dimension corresponds to a feature of the data, with the number of dimensions ranging from a few hundred to tens of thousands, depending on the complexity of the data being represented. a vector's position in this space represents its characteristics. words, phrases, or entire documents, as well as images, audio, and other types of data, can all be vectorized. these feature vectors may be computed from the raw data using machine learning methods such as feature extraction algorithms, word embeddings or deep learning networks. the goal is that semantically similar data items receive feature vectors close to each other. vector databases can be used for similarity search, semantic search, multi - modal search, recommendations engines, large language models ( llms ), object detection, etc. vector databases are also often used to implement retrieval - augmented generation ( rag ), a method to improve domain - specific responses of large language models. the retrieval component of a rag can be any search system, but is most often implemented as a vector database. text documents describing the domain of interest are collected,
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb also includes a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without losing focus on the searched protein or gene. in phylomedb v4 all the information available for each tree
If a person has a scar on the face, at what point did they get it?
[ "after they were born", "at the time of delivery", "at the time of conception", "at the time of fetus development" ]
Key fact: a scar is an acquired characteristic
A
0
openbookqa
human embryonic development or human embryogenesis is the development and formation of the human embryo. it is characterised by the processes of cell division and cellular differentiation of the embryo that occurs during the early stages of development. in biological terms, the development of the human body entails growth from a one - celled zygote to an adult human being. fertilization occurs when the sperm cell successfully enters and fuses with an egg cell ( ovum ). the genetic material of the sperm and egg then combine to form the single cell zygote and the germinal stage of development commences. human embryonic development covers the first eight weeks of development, which have 23 stages, called carnegie stages. at the beginning of the ninth week, the embryo is termed a fetus ( spelled " foetus " in british english ). in comparison to the embryo, the fetus has more recognizable external features and a more complete set of developing organs. human embryology is the study of this development during the first eight weeks after fertilization. the normal period of gestation ( pregnancy ) is about nine months or 40 weeks. the germinal stage refers to the time from fertilization through the development of the early embryo until implantation is completed in the uterus. the germinal stage takes around 10 days. during this stage, the zygote divides in a process called cleavage. a blastocyst is then formed and implants in the uterus
a fetus or foetus ( ; pl. : fetuses, foetuses, rarely feti or foeti ) is the unborn mammalian offspring that develops from an embryo. following the embryonic stage, the fetal stage of development takes place. prenatal development is a continuum, with no clear defining feature distinguishing an embryo from a fetus. however, in general a fetus is characterized by the presence of all the major body organs, though they will not yet be fully developed and functional, and some may not yet be situated in their final anatomical location. in human prenatal development, fetal development begins from the ninth week after fertilization ( which is the eleventh week of gestational age ) and continues until the birth of a newborn. etymology the word fetus ( plural fetuses or rarely, the solecism feti ) comes from latin fētus 'offspring, bringing forth, hatching of young'. the latin plural fētūs is not used in english ; occasionally the plural feti is used in english by analogy with second - declension latin nouns. the predominant british, irish, and commonwealth spelling is foetus, except in medical usage, where fetus is preferred. the - oe - spelling is first attested in 1594 and arose in late latin by analogy with classical latin words like amoenus. non - human animals a fetus is a stage in the prenatal development of viviparous organisms. this stage
development of the human body is the process of growth to maturity. the process begins with fertilization, where an egg released from the ovary of a female is penetrated by a sperm cell from a male. the resulting zygote develops through mitosis and cell differentiation, and the resulting embryo then implants in the uterus, where the embryo continues development through a fetal stage until birth. further growth and development continues after birth, and includes both physical and psychological development that is influenced by genetic, hormonal, environmental and other factors. this continues throughout life : through childhood and adolescence into adulthood. before birth development before birth, or prenatal development ( from latin natalis, 'relating to birth' ) is the process in which a zygote, and later an embryo, and then a fetus develops during gestation. prenatal development starts with fertilization and the formation of the zygote, the first stage in embryonic development which continues in fetal development until birth. fertilization fertilization occurs when the sperm successfully enters the ovum's membrane. the chromosomes of the sperm are passed into the egg to form a unique genome. the egg becomes a zygote and the germinal stage of embryonic development begins. the germinal stage refers to the time from fertilization, through the development of the early embryo, up until implantation. the germinal stage is over at about 10 days of ge
Living near any body of water puts your home at risk of flooding, if there is?
[ "too much rainfall", "West Virginia", "Jelly Beans", "Tape" ]
Key fact: storms cause bodies of water to increase amount of water they contain
A
0
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
then the bulk of effort concentrates on writing the proper mediator code that will transform predicates on weather into a query over the weather website. this effort can become complex if some other source also relates to weather, because the designer may need to write code to properly combine the results from the two sources. on the other hand, in lav, the source database is modeled as a set of views over g { \ displaystyle g }.
If enough dirt is able to accumulate over a carcass, then eventually that carcass may
[ "be solidified in stone", "be frozen in carbonate", "be melted to a tree", "become a source of water" ]
Key fact: fossils are formed when layers of sediment cover the remains of organisms over time
A
0
openbookqa
the bodies of organisms can make a sedimentary rock. plant bodies are lithified to become coal. when shells are cemented together they make a type of limestone. so limestone can be considered chemical or organic.
( t ) ata = fire ; ita = rock, stone, metal ; y = water, river ; yby = earth, ground ; ybytu = air, wind
water that flows over the land from precipitation or melting snow or ice.
If a bear has thick leg fur, then
[ "her mom was very old", "his hair is blonde", "her arms are wide", "his ancestors had the same" ]
Key fact: the thickness of the parts of an organism is an inherited characteristic
D
3
openbookqa
a genealogical dna test is a dna - based genetic test used in genetic genealogy that looks at specific locations of a person's genome in order to find or verify ancestral genealogical relationships, or ( with lower reliability ) to estimate the ethnic mixture of an individual. since different testing companies use different ethnic reference groups and different matching algorithms, ethnicity estimates for an individual vary between tests, sometimes dramatically. three principal types of genealogical dna tests are available, with each looking at a different part of the genome and being useful for different types of genealogical research : autosomal ( atdna ), mitochondrial ( mtdna ), and y - chromosome ( y - dna ). autosomal tests may result in a large number of dna matches to both males and females who have also tested with the same company. each match will typically show an estimated degree of relatedness, i. e., a close family match, 1st - 2nd cousins, 3rd - 4th cousins, etc. the furthest degree of relationship is usually the " 6th - cousin or further " level. however, due to the random nature of which, and how much, dna is inherited by each tested person from their common ancestors, precise relationship conclusions can only be made for close relations. traditional genealogical research, and the sharing of family trees, is typically required for interpretation of the results. autosomal tests are also used in estimating ethnic mix. mtdna and y - dna tests are much more objective. however, they give considerably fewer dna matches
the molecular ancestry network ( manet ) database is a bioinformatics database that maps evolutionary relationships of protein architectures directly onto biological networks. it was originally developed by hee shin kim, jay e. mittenthal and gustavo caetano - anolles in the department of crop sciences of the university of illinois at urbana - champaign. manet traces for example the ancestry of individual metabolic enzymes in metabolism with bioinformatic, phylogenetic, and statistical methods. manet currently links information in the structural classification of proteins ( scop ) database, the metabolic pathways database of the kyoto encyclopedia of genes and genomes ( kegg ), and phylogenetic reconstructions describing the evolution of protein fold architecture at a universal level. the database has been updated to reflect evolution of metabolism at the level of protein fold families. manet literally " paints " the ancestries of enzymes derived from rooted phylogenetic trees directly onto over one hundred metabolic pathways representations, paying homage to one of the fathers of impressionism. it also provides numerous functionalities that enable searching specific protein folds with defined ancestry values, displaying the distribution of enzymes that are painted, and exploring quantitative details describing individual protein folds. this permits the study of global and local metabolic network architectures, and the extraction of evolutionary patterns at global and local levels. a statistical analysis of the data in manet showed for example a patchy distribution of ancestry values assigned to protein folds in each subnetwork, indicating that evolution of metabolism occurred globally by widespread recruitment of enzymes
a hierarchical database model is a data model in which the data is organized into a tree - like structure. the data are stored as records which is a collection of one or more fields. each field contains a single value, and the collection of fields in a record defines its type. one type of field is the link, which connects a given record to associated records. using links, records link to other records, and to other records, forming a tree. an example is a " customer " record that has links to that customer's " orders ", which in turn link to " line _ items ". the hierarchical database model mandates that each child record has only one parent, whereas each parent record can have zero or more child records. the network model extends the hierarchical by allowing multiple parents and children. in order to retrieve data from these databases, the whole tree needs to be traversed starting from the root node. both models were well suited to data that was normally stored on tape drives, which had to move the tape from end to end in order to retrieve data. when the relational database model emerged, one criticism of hierarchical database models was their close dependence on application - specific implementation. this limitation, along with the relational model's ease of use, contributed to the popularity of relational databases, despite their initially lower performance in comparison with the existing network and hierarchical models. history the hierarchical structure was developed by ibm in the 1960s and used in early mainframe dbms. records'relationships form a tree
Which animal will eat only plants?
[ "snake", "fish", "slug", "deer" ]
Key fact: herbivores only eat plants
D
3
openbookqa
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb includes also a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without missing the focus on the searched protein or gene. in phylomedb v4 all the information available for each tree
If a spoon was placed outside under our closest star, what could happen to it?
[ "it would shrink smaller", "it would feel warmer to touch", "it would freeze over", "it would become a gas" ]
Key fact: absorbing sunlight causes objects to heat
B
1
openbookqa
some molecules are too small to condense. they remain as a gas which can be burned as fuel.
if the temperature of a gas sample is decreased, the volume decreases as well.
gases assume the shape of their container.
If an animal gets too cold for its body to perform its functions, that animal will
[ "perish due coldness", "laugh to death", "do math", "float" ]
Key fact: if a living thing becomes too cold then that living thing will die
A
0
openbookqa
q is a programming language for array processing, developed by arthur whitney. it is proprietary software, commercialized by kx systems. q serves as the query language for kdb +, a disk based and in - memory, column - based database. kdb + is based on the language k, a terse variant of the language apl. q is a thin wrapper around k, providing a more readable, english - like interface. one of the use cases is financial time series analysis, as one could do inexact time matches. an example is to match the bid and the ask before that. both timestamps slightly differ and are matched anyway. overview the fundamental building blocks of q are atoms, lists, and functions. atoms are scalars and include the data types numeric, character, date, and time. lists are ordered collections of atoms ( or other lists ) upon which the higher level data structures dictionaries and tables are internally constructed. a dictionary is a map of a list of keys to a list of values. a table is a transposed dictionary of symbol keys and equal length lists ( columns ) as values. a keyed table, analogous to a table with a primary key placed on it, is a dictionary where the keys and values are arranged as two tables. the following code demonstrates the relationships of the data structures. expressions to evaluate appear prefixed with the q ) prompt, with the output of the evaluation shown beneath : these entities are manipulated
the flat ( or table ) model consists of a single, two - dimensional array of data elements, where all members of a given column are assumed to be similar values, and all members of a row are assumed to be related to one another. for instance, columns for name and password that might be used as a part of a system security database. each row would have the specific password associated with an individual user. columns of the table often have a type associated with them, defining them as character data, date or time information, integers, or floating point numbers. this tabular format is a precursor to the relational model.
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
An iron horseshoe with red stripes is held over a paperclip and
[ "melts it", "drops it", "burns it", "yanks it" ]
Key fact: a magnet attracts magnetic metals through magnetism
D
3
openbookqa
a database dump contains a record of the table structure and / or the data from a database and is usually in the form of a list of sql statements ( " sql dump " ). a database dump is most often used for backing up a database so that its contents can be restored in the event of data loss. corrupted databases can often be recovered by analysis of the dump. database dumps are often published by free content projects, to facilitate reuse, forking, offline use, and long - term digital preservation. dumps can be transported into environments with internet blackouts or otherwise restricted internet access, as well as facilitate local searching of the database using sophisticated tools such as grep. see also import and export of data core dump databases database management system sqlyog - mysql gui tool to generate database dump data portability external links mysqldump a database backup program postgresql dump backup methods, for postgresql databases.
a crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. they are characterized by symmetry, morphology, and directionally dependent physical properties. a crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. ( molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in x - ray, neutron, and electron diffraction based crystallography ). crystal structures of crystalline material are typically determined from x - ray or neutron single - crystal diffraction data and stored in crystal structure databases. they are routinely identified by comparing reflection intensities and lattice spacings from x - ray powder diffraction data with entries in powder - diffraction fingerprinting databases. crystal structures of nanometer sized crystalline samples can be determined via structure factor amplitude information from single - crystal electron diffraction data or structure factor amplitude and phase angle information from fourier transforms of hrtem images of crystallites. they are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice - fringe fingerprint plots with entries in a lattice - fringe fingerprinting database. crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. many provide structure visualization capabilities. they can be browser based or
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
Cars driving over a stone road throughout a year, including hot days and cool nights, may cause the road to
[ "evaporate", "burn", "flood", "break up" ]
Key fact: mechanical weathering is when rocks are broken down by mechanical means
D
3
openbookqa
a cost database is a computerized database of cost estimating information, which is normally used with construction estimating software to support the formation of cost estimates. a cost database may also simply be an electronic reference of cost data. overview a cost database includes the electronic equivalent of a cost book, or cost reference book, a tool used by estimators for many years. cost books may be internal records at a particular company or agency, or they may be commercially published books on the open market. aec teams and federal agencies can and often do collect internally sourced data from their own specialists, vendors, and partners. this is valuable personalized cost data that is captured but often doesn't cover the same range that commercial cost book data can. internally sourced data is difficult to maintain and do not have the same level of developed user interface or functionalities as a commercial product. the cost database may be stored in relational database management system, which may be in either an open or proprietary format, serving the data to the cost estimating software. the cost database may be hosted in the cloud. estimators use a cost database to store data in structured way which is easy to manage and retrieve. details costing data the most basic element of a cost estimate and therefore the cost database is the estimate line item or work item. an example is " concrete, 4000 psi ( 30 mpa ), " which is the description of the item. in the cost database, an item is a row or record in
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
in nuclear data evaluation and validation, a library and a database serve different purposes, but both are essential for accurate predictions in theoretical nuclear reactor models. a nuclear data library is a collection of evaluated nuclear data files that contain information about various nuclear reactions, decay processes, and other relevant properties of atomic nuclei. these libraries are created through a rigorous evaluation process that combines experimental data, theoretical models, and statistical methods to provide the best possible representation of nuclear properties. some well - known nuclear data libraries include endf ( evaluated nuclear data file ), jeff ( joint evaluated fission and fusion ), and jendl ( japanese evaluated nuclear data library ). on the other hand, a nuclear database is a structured and organized collection of raw experimental and theoretical data related to nuclear reactions and properties. these databases store information from various sources, such as experimental measurements, theoretical calculations, and simulations. they serve as a primary source of information for nuclear data evaluators when creating nuclear data libraries. examples of nuclear databases include exfor ( experimental nuclear reaction data ), cinda ( computer index of nuclear data ), and ensdf ( evaluated nuclear structure data file ). the choice between a library and a database affects the accuracy of nuclear data predictions in a theoretical nuclear reactor model in several ways : 1. quality of data : nuclear data libraries contain evaluated data, which means they have undergone a thorough evaluation process to ensure their accuracy and reliability. in contrast, databases contain raw data that may not have been evaluated or validated.
A lack of water has a direct connection to the amount of available
[ "shelters", "sustenance", "rainy days", "mates" ]
Key fact: as available water in an environment decreases , the amount of available food in that environment will decrease
B
1
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
climate shelters are a place of refuge for populations that seek shelter from extreme climate events exacerbated by the effects of urban heat islands. they include cooling centers, but also encompass parks and other outdoor spaces designed to provide a harbor for cooler temperatures than surrounding areas. several cities have designed programs to implement urban climate shelters. purpose climate shelters are essential urban facilities aimed at facilitating adaptation to extreme weather events, particularly heat waves, which have been increasingly linked to elevated mortality rates. these shelters, whether situated indoors or outdoors, are designed to maintain a safe and comfortable temperature for individuals vulnerable to extreme weather conditions. benefits climate shelters offer crucial protection for communities vulnerable to climate - related disasters like floods, storms, and extreme temperatures. these designs not only reduce the risk of damage but also prove cost - effective in the long run by preventing losses. for economically disadvantaged communities, access to affordable resilient housing is particularly vital, as it helps safeguard their homes and livelihoods. innovative solutions, often identified through competitions, have shown that even simple and low - cost design features can significantly enhance the resilience of homes. moreover, both qualitative and quantitative analyses consistently demonstrate that investments in resilient housing yield high benefit - cost ratios across various scenarios. this emphasizes the economic justification for prioritizing such initiatives. they remain a crucial component of broader climate adaptation strategies, offering tangible benefits in terms of risk reduction, cost savings, and community resilience. by investing in resilient housing, we not
a vulnerability database ( vdb ) is a platform aimed at collecting, maintaining, and disseminating information about discovered computer security vulnerabilities. the database will customarily describe the identified vulnerability, assess the potential impact on affected systems, and any workarounds or updates to mitigate the issue. a vdb will assign a unique identifier to each vulnerability cataloged such as a number ( e. g. 123456 ) or alphanumeric designation ( e. g. vdb - 2020 - 12345 ). information in the database can be made available via web pages, exports, or api. a vdb can provide the information for free, for pay, or a combination thereof. history the first vulnerability database was the " repaired security bugs in multics ", published by february 7, 1973 by jerome h. saltzer. he described the list as " a list of all known ways in which a user may break down or circumvent the protection mechanisms of multics ". the list was initially kept somewhat private with the intent of keeping vulnerability details until solutions could be made available. the published list contained two local privilege escalation vulnerabilities and three local denial of service attacks. types of vulnerability databases major vulnerability databases such as the iss x - force database, symantec / securityfocus bid database, and the open source vulnerability database ( osvdb ) aggregate a broad range of publicly disclosed vulnerabilities, including common vu
An arid sandy place has very little
[ "sustenance", "sand", "sun", "heat" ]
Key fact: a desert environment contains very little food
A
0
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
integrated surface database ( isd ) is a global database compiled by the national oceanic and atmospheric administration ( noaa ) and the national centers for environmental information ( ncei ) comprising hourly and synoptic surface observations compiled globally from ~35,500 weather stations ; it is updated, automatically, hourly. the data largely date back to paper records which were keyed in by hand from the '60s and '70s ( and in some cases, weather observations from over one hundred years ago ). it was developed by the joint federal climate complex project in asheville, north carolina. = = references = =
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
Which animal might catch its sustenance faster?
[ "Snail", "Frigate Bird", "turtle", "Sloth" ]
Key fact: if an organism 's prey moves quickly then that organism may need to move quickly to catch its prey
B
1
openbookqa
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb includes also a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without missing the focus on the searched protein or gene. in phylomedb v4 all the information available for each tree
tassdb ( tandem splice site database ) is a database of tandem splice sites of eight species. see also alternative splicing. references. external links https://archive.today/20070106023527/http://helios.informatik.uni-freiburg.de/tassdb/.
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
Squirrels eat a variety of foods including
[ "beef", "tender leaf buds", "pork", "cotton candy" ]
Key fact: squirrels eat edible plants
B
1
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
dr. duke's phytochemical and ethnobotanical databases is an online database developed by james a. duke at the usda. the databases report species, phytochemicals, and biological activity, as well as ethnobotanical uses. the current phytochemical and ethnobotanical databases facilitate plant, chemical, bioactivity, and ethnobotany searches. a large number of plants and their chemical profiles are covered, and data are structured to support browsing and searching in several user - focused ways. for example, users can get a list of chemicals and activities for a specific plant of interest, using either its scientific or common name download a list of chemicals and their known activities in pdf or spreadsheet form find plants with chemicals known for a specific biological activity display a list of chemicals with their ld toxicity data find plants with potential cancer - preventing activity display a list of plants for a given ethnobotanical use find out which plants have the highest levels of a specific chemical references to the supporting scientific publications are provided for each specific result. also included are links to nutritional databases, plants and cancer treatments and other plant - related databases. the content of the database is licensed under the creative commons cc0 public domain. external links dr. duke's phytochemical and ethnobotanical databases references ( dataset ) u. s. department of agriculture, agricultural research service. 1992 - 2016
mycobank is an online database, documenting new mycological names and combinations, eventually combined with descriptions and illustrations. it is run by the westerdijk fungal biodiversity institute in utrecht. each novelty, after being screened by nomenclatural experts and found in accordance with the icn ( international code of nomenclature for algae, fungi, and plants ), is allocated a unique mycobank number before the new name has been validly published. this number then can be cited by the naming author in the publication where the new name is being introduced. only then, this unique number becomes public in the database. by doing so, this system can help solve the problem of knowing which names have been validly published and in which year. mycobank is linked to other important mycological databases such as index fungorum, life science identifiers, global biodiversity information facility ( gbif ) and other databases. mycobank is one of three nomenclatural repositories recognized by the nomenclature committee for fungi ; the others are index fungorum and fungal names. mycobank has emerged as the primary registration system for new fungal taxa and nomenclatural acts. according to a 2021 analysis of taxonomic innovations in lichen and allied fungi between 2018 and 2020, 97.7 % of newly described taxa and 76.5 % of new combinations obtained their registration numbers from mycobank, suggesting broad adoption by the mycological community. the system
A bear in the Arctic can go a long time without eating
[ "if it is focused", "if it is determined", "if it is working", "if it has excess chub" ]
Key fact: an animal can survive in an environment with little food by storing fat
D
3
openbookqa
database testing usually consists of a layered process, including the user interface ( ui ) layer, the business layer, the data access layer and the database itself. the ui layer deals with the interface design of the database, while the business layer includes databases supporting business strategies. purposes databases, the collection of interconnected files on a server, storing information, may not deal with the same type of data, i. e. databases may be heterogeneous. as a result, many kinds of implementation and integration errors may occur in large database systems, which negatively affect the system's performance, reliability, consistency and security. thus, it is important to test in order to obtain a database system which satisfies the acid properties ( atomicity, consistency, isolation, and durability ) of a database management system. one of the most critical layers is the data access layer, which deals with databases directly during the communication process. database testing mainly takes place at this layer and involves testing strategies such as quality control and quality assurance of the product databases. testing at these different layers is frequently used to maintain the consistency of database systems, most commonly seen in the following examples : data is critical from a business point of view. companies such as google or symantec, who are associated with data storage, need to have a durable and consistent database system. if database operations such as insert, delete, and update are performed without testing the database for consistency first, the company risks a crash of the entire
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
Electricity causes less damage to the Earth's atmosphere than
[ "Gasoline", "Potatoes", "The sun", "Water" ]
Key fact: electricity causes less pollution than gasoline
A
0
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
the chemical database service is an epsrc - funded mid - range facility that provides uk academic institutions with access to a number of chemical databases. it has been hosted by the royal society of chemistry since 2013, before which it was hosted by daresbury laboratory ( part of the science and technology facilities council ). currently, the included databases are : acd / i - lab, a tool for prediction of physicochemical properties and nmr spectra from a chemical structure available chemicals directory, a structure - searchable database of commercially available chemicals cambridge structural database ( csd ), a crystallographic database of organic and organometallic structures inorganic crystal structure database ( icsd ), a crystallographic database of inorganic structures crystalworks, a database combining data from csd, icsd and crystmet detherm, a database of thermophysical data for chemical compounds and mixtures spresiweb, a database of organic compounds and reactions = = references = =
a chemical database is a database specifically designed to store chemical information. this information is about chemical and crystal structures, spectra, reactions and syntheses, and thermophysical data. types of chemical databases bioactivity database bioactivity databases correlate structures or other chemical information to bioactivity results taken from bioassays in literature, patents, and screening programs. chemical structures chemical structures are traditionally represented using lines indicating chemical bonds between atoms and drawn on paper ( 2d structural formulae ). while these are ideal visual representations for the chemist, they are unsuitable for computational use and especially for search and storage. small molecules ( also called ligands in drug design applications ), are usually represented using lists of atoms and their connections. large molecules such as proteins are however more compactly represented using the sequences of their amino acid building blocks. radioactive isotopes are also represented, which is an important attribute for some applications. large chemical databases for structures are expected to handle the storage and searching of information on millions of molecules taking terabytes of physical memory. literature database chemical literature databases correlate structures or other chemical information to relevant references such as academic papers or patents. this type of database includes stn, scifinder, and reaxys. links to literature are also included in many databases that focus on chemical characterization. crystallographic database crystallographic databases store x - ray crystal structure data. common examples include protein data bank and cambridge structural database. nmr spectra database nmr
If the ground is fully shaded, and plants there barely grow, likely the reason is
[ "water there is clean", "people there are silly", "oaks there are mighty", "birds there are fat" ]
Key fact: large trees block sunlight from reaching the ground
C
2
openbookqa
the atmosphere is an exchange pool for water. ice masses, aquifers, and the deep ocean are water reservoirs.
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
the context of count data.
Which have a positive impact on the environment?
[ "driving gas guzzlers", "canvas grocery sacks", "littering", "unchecked consumerism" ]
Key fact: recycling has a positive impact on the environment
B
1
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
until the 1980s, databases were viewed as computer systems that stored record - oriented and business data such as manufacturing inventories, bank records, and sales transactions. a database system was not expected to merge numeric data with text, images, or multimedia information, nor was it expected to automatically notice patterns in the data it stored. in the late 1980s the concept of an intelligent database was put forward as a system that manages information ( rather than data ) in a way that appears natural to users and which goes beyond simple record keeping. the term was introduced in 1989 by the book intelligent databases by kamran parsaye, mark chignell, setrag khoshafian and harry wong. the concept postulated three levels of intelligence for such systems : high level tools, the user interface and the database engine. the high level tools manage data quality and automatically discover relevant patterns in the data with a process called data mining. this layer often relies on the use of artificial intelligence techniques. the user interface uses hypermedia in a form that uniformly manages text, images and numeric data. the intelligent database engine supports the other two layers, often merging relational database techniques with object orientation. in the twenty - first century, intelligent databases have now become widespread, e. g. hospital databases can now call up patient histories consisting of charts, text and x - ray images just with a few mouse clicks, and many corporate databases include decision support tools based on sales pattern analysis. external links intelligent databases, book
database theory encapsulates a broad range of topics related to the study and research of the theoretical realm of databases and database management systems. theoretical aspects of data management include, among other areas, the foundations of query languages, computational complexity and expressive power of queries, finite model theory, database design theory, dependency theory, foundations of concurrency control and database recovery, deductive databases, temporal and spatial databases, real - time databases, managing uncertain data and probabilistic databases, and web data. most research work has traditionally been based on the relational model, since this model is usually considered the simplest and most foundational model of interest. corresponding results for other data models, such as object - oriented or semi - structured models, or, more recently, graph data models and xml, are often derivable from those for the relational model. database theory helps one to understand the complexity and power of query languages and their connection to logic. starting from relational algebra and first - order logic ( which are equivalent by codd's theorem ) and the insight that important queries such as graph reachability are not expressible in this language, more powerful language based on logic programming and fixpoint logic such as datalog were studied. the theory also explores foundations of query optimization and data integration. here most work studied conjunctive queries, which admit query optimization even under constraints using the chase algorithm. the main research conferences in the area are the acm symposium on principles of database systems ( pods
What could be used to find out how much carpet is needed for a room?
[ "a tape measure", "a compass", "a barometer", "a beam balance" ]
Key fact: a tape measure is used to measure distance
A
0
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
the context of count data.
Naomi looks around outside on a prairie and it is cloudless but the sun is nowhere to be seen. It is
[ "night", "noon", "morning", "afternoon" ]
Key fact: if it is night then the sun has set
A
0
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
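The key property described here — the program and the database share one model of representation, so objects are stored and fetched directly rather than mapped to tables — can be approximated with Python's standard-library `shelve` module. This is a stand-in sketch, not a full OODBMS; the class and keys are invented.

```python
import os
import shelve
import tempfile

# An application object persisted as-is: no table schema, no mapping layer.
class Customer:
    def __init__(self, name, orders):
        self.name, self.orders = name, orders

path = os.path.join(tempfile.mkdtemp(), "objdb")

with shelve.open(path) as db:
    db["c1"] = Customer("Ada", ["o-1", "o-2"])   # store the object itself

with shelve.open(path) as db:
    print(db["c1"].orders)  # ['o-1', 'o-2'] -- same object model on read
```

The contrast with a relational system is that nothing here was flattened into rows: the programmer reads back the same `Customer` shape the program was built around.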
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
An example of a conductor might be
[ "carrots", "wood", "a nickel", "magic" ]
Key fact: An electrical conductor is a vehicle for the flow of electricity
C
2
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
the ki database ( or ki db ) is a public domain database of published binding affinities ( ki ) of drugs and chemical compounds for receptors, neurotransmitter transporters, ion channels, and enzymes. the resource is maintained by the university of north carolina at chapel hill and is funded by the nimh psychoactive drug screening program and by a gift from the heffter research institute. as of april 2010, the database had data for 7 449 compounds at 738 different receptors and, as of 27 april 2018, 67 696 ki values. the ki database has data useful for both chemical biology and chemogenetics. external links description search form bindingdb. org - a similar publicly available database
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by
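The two minimum requirements named above — data presented as tables of rows and columns, plus relational operators to manipulate them — can be demonstrated with Python's built-in SQLite driver. The schema and data are invented for illustration.

```python
import sqlite3

# Two relations (tables), each a set of rows over named columns.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL)")
con.execute("INSERT INTO customer VALUES (1, 'Ada')")
con.execute("INSERT INTO orders VALUES (10, 1, 25.0)")

# A relational operator (a join) manipulating the data in tabular form.
row = con.execute(
    "SELECT c.name, o.total "
    "FROM customer c JOIN orders o ON o.customer_id = c.id"
).fetchone()
print(row)  # ('Ada', 25.0)
```

SQL here is the querying option the passage mentions; the relational model itself is the underlying contract that both tables and the join result are relations.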
Your body goes into starvation mode when you have insufficient amounts of
[ "shoes", "fun", "pants", "sustenance" ]
Key fact: lack of food causes starvation
D
3
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
database theory encapsulates a broad range of topics related to the study and research of the theoretical realm of databases and database management systems. theoretical aspects of data management include, among other areas, the foundations of query languages, computational complexity and expressive power of queries, finite model theory, database design theory, dependency theory, foundations of concurrency control and database recovery, deductive databases, temporal and spatial databases, real - time databases, managing uncertain data and probabilistic databases, and web data. most research work has traditionally been based on the relational model, since this model is usually considered the simplest and most foundational model of interest. corresponding results for other data models, such as object - oriented or semi - structured models, or, more recently, graph data models and xml, are often derivable from those for the relational model. database theory helps one to understand the complexity and power of query languages and their connection to logic. starting from relational algebra and first - order logic ( which are equivalent by codd's theorem ) and the insight that important queries such as graph reachability are not expressible in this language, more powerful language based on logic programming and fixpoint logic such as datalog were studied. the theory also explores foundations of query optimization and data integration. here most work studied conjunctive queries, which admit query optimization even under constraints using the chase algorithm. the main research conferences in the area are the acm symposium on principles of database systems ( pods
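The passage's flagship example — graph reachability is not expressible in first-order logic/relational algebra but is expressible in fixpoint languages like Datalog — can be made concrete by computing the least fixpoint directly. The rules below are the standard Datalog program for reachability; the edge set is invented.

```python
# Datalog-style reachability as a least fixpoint:
#   reach(x, y) <- edge(x, y)
#   reach(x, y) <- reach(x, z), edge(z, y)
edges = {("a", "b"), ("b", "c"), ("c", "d")}

reach = set(edges)            # base rule
while True:
    new = {(x, w) for (x, y) in reach
                  for (z, w) in edges if y == z}
    if new <= reach:
        break                 # fixpoint reached: no rule adds a new fact
    reach |= new              # recursive rule

print(("a", "d") in reach)  # True
```

No single relational-algebra expression can do this for graphs of unbounded depth, which is exactly why the iteration-until-fixpoint step is the extra power Datalog adds.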
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
If a reptile were taken to Iceland it would
[ "have many babies", "thrive", "build a home", "die" ]
Key fact: an animal usually requires a warm body temperature for survival
D
3
openbookqa
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
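The inference risk described at the end of this passage — combining permitted aggregate queries to learn about one individual — is easy to demonstrate. This is the classic "tracker" pattern in miniature; the table and salaries are invented.

```python
# Two allowed aggregate (SUM) queries combine to expose one record.
people = [
    {"name": "ann", "dept": "sales", "salary": 50},
    {"name": "bob", "dept": "sales", "salary": 60},
    {"name": "eve", "dept": "ops",   "salary": 70},
]

def total(pred):
    """An aggregate-only query interface: returns a SUM, never a row."""
    return sum(p["salary"] for p in people if pred(p))

everyone    = total(lambda p: True)                 # 180
all_but_eve = total(lambda p: p["dept"] == "sales") # 110

# eve is the only person outside sales, so the difference is her salary:
print(everyone - all_but_eve)  # 70
```

Defences such as minimum query-set sizes or noise addition exist precisely because the interface above, though it returns only aggregates, is not actually private.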
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
a hierarchical database model is a data model in which the data is organized into a tree - like structure. the data are stored as records, each of which is a collection of one or more fields. each field contains a single value, and the collection of fields in a record defines its type. one type of field is the link, which connects a given record to associated records. using links, records link to other records, which in turn link to further records, forming a tree. an example is a " customer " record that has links to that customer's " orders ", which in turn link to " line _ items ". the hierarchical database model mandates that each child record has only one parent, whereas each parent record can have zero or more child records. the network model extends the hierarchical by allowing multiple parents and children. in order to retrieve data from these databases, the whole tree needs to be traversed starting from the root node. both models were well suited to data that was normally stored on tape drives, which had to move the tape from end to end in order to retrieve data. when the relational database model emerged, one criticism of hierarchical database models was their close dependence on application - specific implementation. this limitation, along with the relational model's ease of use, contributed to the popularity of relational databases, despite their initially lower performance in comparison with the existing network and hierarchical models. history the hierarchical structure was developed by ibm in the 1960s and used in early mainframe dbms. records' relationships form a tree
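The customer → orders → line_items example and the root-first traversal requirement can be sketched directly. The record shapes are invented; the one-parent-per-child constraint is enforced simply by nesting each child inside exactly one parent.

```python
# The passage's customer -> orders -> line_items tree. Each child record
# lives inside exactly one parent, as the hierarchical model mandates.
tree = {
    "type": "customer", "name": "Ada",
    "children": [
        {"type": "order", "id": 1,
         "children": [
             {"type": "line_item", "sku": "X", "children": []},
         ]},
    ],
}

def visit(record):
    # retrieval must start at the root and walk down the links
    out = [record["type"]]
    for child in record["children"]:
        out.extend(visit(child))
    return out

print(visit(tree))  # ['customer', 'order', 'line_item']
```

Finding a `line_item` without knowing its customer requires walking the whole tree — the access-path dependence that relational critics pointed at.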
Why would philanthropists donate special straws to help poor countries?
[ "they pollute water", "they need straws", "they treat water", "they poison water" ]
Key fact: treating water is used to remove harmful substances before drinking
C
2
openbookqa
people pollute water when they apply excess chemicals to their lawn. they may also dispose of pollutants incorrectly.
toxicological databases are large compilations of data derived from aquatic and environmental toxicity studies. data is aggregated from a large number of individual studies in which toxic effects upon aquatic and terrestrial organisms have been determined for different chemicals. these databases are then used by toxicologists, chemists, regulatory agencies and scientists to investigate and predict the likelihood that an organic or inorganic chemical will cause an adverse effect ( i. e. toxicity ) on exposed organisms. several such databases have been compiled relating specifically to aquatic toxicology. utility these databases are invaluable resources in the field of aquatic toxicology because the likelihood that a chemical will cause toxicity is highly variable across the broad spectrum of contaminants in the environment. this is because the likelihood of adverse effects on an organism is dependent on the concentration of that substance in the target tissues of the organism, the physicochemical properties of that chemical and the duration of exposure to the chemical. tools capable of predicting the toxicity of specific chemicals to particular organisms or groups of organisms are essential to regulators and researchers in the field of toxicology. available databases in aquatic toxicology multiple databases exist and each generally pertains to a single aspect of aquatic toxicology such as pcbs, tissue residues or sediment toxicity. other informational and regulatory databases on toxicology in general are maintained by the u. s. epa, usgs, united states army corps of engineers and the national oceanic and atmospheric administration. in the u. s. there are three major databases pertaining specifically
water pollution happens when contaminants enter water bodies. contaminants are any substances that harm the health of the environment or humans. most contaminants enter the water because of humans. surface water ( river or lake ) can be exposed to and contaminated by acid rain, storm water runoff, pesticide runoff, and industrial waste. this water is cleaned somewhat by exposure to sunlight, aeration, and microorganisms in the water. groundwater ( private wells and some public water supplies ) generally takes longer to become contaminated, but the natural cleaning process also may take much longer. groundwater can be contaminated by disease - producing pathogens, careless disposal of hazardous household chemical - containing products, agricultural chemicals, and leaking underground storage tanks.
Melting of polar icecaps will
[ "lead to some US states gaining surface area", "lead to more animal species roaming the Earth", "cause the loss of animal habitats", "cause a boom in the polar bear population" ]
Key fact: as the level of water rises , the amount of available land will decrease
C
2
openbookqa
zoogeography is the study of the geographical distribution of animal species across the earth. one of the key factors that zoogeographers use to explain the patterns observed in animal distribution is plate tectonics. plate tectonics describes the movement of the earth's lithospheric plates that has been ongoing for millions of years. these movements have caused continents to drift apart or come together, leading to the isolation or mingling of animal populations, and consequently significant impacts on their distribution. for example, the distribution of marsupials in australia and south america can be explained by the former connection of these continents during the time when marsupials were diversifying.
in the last decade, climate change has significantly impacted the distribution and migration patterns of polar bear species. the primary reason for this change is the loss of sea ice, which serves as a crucial habitat and hunting ground for polar bears. as the arctic warms, sea ice melts earlier in the spring and forms later in the fall, leading to a reduction in the overall extent and thickness of the ice. this has several consequences for polar bears : 1. altered distribution : as sea ice becomes less available, polar bears are forced to move to areas with more stable ice conditions. this has led to changes in their distribution, with some populations moving further north or spending more time on land. in some cases, this has resulted in increased interactions with human populations, as bears search for alternative food sources. 2. changes in migration patterns : the loss of sea ice has also affected the timing and routes of polar bear migrations. bears typically follow the seasonal movement of sea ice to hunt seals, their primary prey. as the ice melts earlier and forms later, bears must adjust their migration patterns to find suitable hunting grounds. this can lead to longer migrations and increased energy expenditure, which can negatively impact their overall health and reproductive success. 3. reduced access to prey : the decline in sea ice also affects the availability of seals, as they rely on the ice for breeding and resting. with fewer seals available, polar bears must travel greater distances to find food, further altering their migration patterns and increasing the risk
environmental factors and historical events play a significant role in the formation and evolution of unique species assemblages in polar ecosystems. some of these factors include temperature, ice cover, nutrient availability, and geological events. 1. temperature : polar ecosystems are characterized by extreme cold temperatures, which have a direct impact on the species that can survive in these environments. the low temperatures limit the metabolic rates and growth of organisms, leading to the evolution of cold - adapted species with unique physiological adaptations. 2. ice cover : the presence of ice in polar ecosystems influences the distribution and abundance of species. sea ice provides a habitat for ice - dependent species such as polar bears, seals, and some fish species. the retreat and advance of ice sheets during glacial and interglacial periods have also shaped the distribution of terrestrial species, leading to the isolation and speciation of unique assemblages. 3. nutrient availability : nutrient availability in polar ecosystems is generally low due to the limited input from terrestrial sources and slow decomposition rates. this leads to a lower diversity and abundance of primary producers, which in turn affects the entire food web. species in these ecosystems have evolved to cope with these low nutrient conditions, often through specialized feeding strategies or symbiotic relationships. 4. geological events : historical geological events, such as the opening and closing of ocean passages, have influenced the connectivity between polar ecosystems and other regions. this has led to the exchange of species and the formation of unique assemblages through processes such as
If an organism dies, what happens to that organism's population?
[ "relaxes", "cries", "increases", "subsides" ]
Key fact: if an organism dies then the population of that organism will decrease
D
3
openbookqa
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
in computer science, an inverted index ( also referred to as a postings list, postings file, or inverted file ) is a database index storing a mapping from content, such as words or numbers, to its locations in a table, or in a document or a set of documents ( named in contrast to a forward index, which maps from documents to content ). the purpose of an inverted index is to allow fast full - text searches, at a cost of increased processing when a document is added to the database. the inverted file may be the database file itself, rather than its index. it is the most popular data structure used in document retrieval systems, used on a large scale for example in search engines.
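The content-to-locations mapping described here is short enough to build from scratch. The three documents are invented; real systems add tokenization, stemming, and compressed posting lists, but the core structure is this dict of sets.

```python
# Build an inverted index: word -> set of ids of documents containing it.
docs = {
    0: "graph databases store relationships",
    1: "relational databases store tables",
    2: "an index speeds up search",
}

index = {}
for doc_id, text in docs.items():
    for word in set(text.split()):           # the indexing cost per document
        index.setdefault(word, set()).add(doc_id)

# Full-text search is now a lookup plus set operations on posting lists.
print(sorted(index["databases"]))                 # [0, 1]
print(sorted(index["store"] & index["tables"]))   # [1]
```

The trade-off in the passage is visible in the loop: every added document pays an indexing pass so that later queries avoid scanning any document text at all.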
Diamonds exist because of the existence of
[ "raw carbon", "work force", "plant feeding", "machines" ]
Key fact: if something is a raw material then that something comes directly from a source
A
0
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
the plant proteome database is a national science foundation - funded project to determine the biological function of each protein in plants. it includes data for two plants that are widely studied in molecular biology, arabidopsis thaliana and maize ( zea mays ). initially the project was limited to plant plastids, under the name of the plastid pdb, but was expanded and renamed plant pdb in november 2007. see also proteome references external links plant proteome database home page
the plant dna c - values database ( https : / / cvalues. science. kew. org / ) is a comprehensive catalogue of c - value ( nuclear dna content, or in diploids, genome size ) data for land plants and algae. the database was created by prof. michael d. bennett and dr. ilia j. leitch of the royal botanic gardens, kew, uk. the database was originally launched as the " angiosperm dna c - values database " in april 1997, essentially as an online version of collected data lists that had been published by prof. bennett and colleagues since the 1970s. release 1. 0 of the more inclusive plant dna c - values database was launched in 2001, with subsequent releases 2. 0 in january 2003 and 3. 0 in december 2004. in addition to the angiosperm dataset made available in 1997, the database has been expanded taxonomically several times and now includes data from pteridophytes ( since 2000 ), gymnosperms ( since 2001 ), bryophytes ( since 2001 ), and algae ( since 2004 ) ( see ( 1 ) for update history ). ( note that each of these subset databases is cited individually as they may contain different sets of authors ). the most recent release of the database ( release 7. 1 ) went live in april 2019. it contains data for 12, 273 species of plants comprising 10, 770 angiosperms, 421 gymnos
Which of the following is true?
[ "gut flora can make you more healthy", "bacteria is always bad", "bacteria in your brain helps digest food", "gut bacteria always makes you sick" ]
Key fact: bacteria can help digest food in humans
A
0
openbookqa
the gut microbiota, also known as the gut flora, is the complex community of microorganisms that live in the digestive tracts of humans and other animals. these microorganisms play a crucial role in maintaining the overall health of the host, including digestion, immune system function, and even the production of certain vitamins. in recent years, research has shown that the gut microbiota can also impact gut - brain communication and influence mental health disorders such as anxiety and depression. the gut - brain axis is a bidirectional communication system between the gastrointestinal tract and the central nervous system, which includes the brain. this communication occurs through various pathways, including the vagus nerve, immune system, and the production of neurotransmitters and other signaling molecules by gut bacteria. the composition of gut microbiota can impact gut - brain communication and mental health in several ways : 1. production of neurotransmitters : some gut bacteria can produce neurotransmitters such as serotonin, dopamine, and gamma - aminobutyric acid ( gaba ), which play essential roles in regulating mood, anxiety, and stress response. an imbalance in the gut microbiota may lead to altered levels of these neurotransmitters, contributing to anxiety and depression. 2. inflammation and immune system activation : dysbiosis, or an imbalance in the gut microbiota, can lead
there are billions of bacteria inside the human digestive tract. they help us digest food. they also make vitamins and play other important roles. we use bacteria in many other ways as well. for example, we use them to :.
gut microbiota, also known as gut flora, are the microorganisms that reside in the human gastrointestinal tract. they play a crucial role in maintaining human health by aiding in digestion, producing essential vitamins, and protecting against harmful pathogens. however, an imbalance in gut microbiota can contribute to various diseases, including inflammatory bowel disease ( ibd ), irritable bowel syndrome ( ibs ), and non - alcoholic fatty liver disease ( nafld ). here are some specific mechanisms by which gut microbiota affect human health and contribute to these diseases : 1. inflammation : imbalances in gut microbiota can lead to an overgrowth of harmful bacteria, which can trigger an immune response and cause inflammation in the gut. this inflammation is a key factor in the development of ibd, which includes crohn's disease and ulcerative colitis. in ibs, low - grade inflammation may also contribute to symptoms such as abdominal pain and altered bowel habits. 2. barrier function : the gut microbiota helps maintain the integrity of the intestinal barrier, preventing harmful substances and pathogens from entering the bloodstream. dysbiosis, or an imbalance in gut microbiota, can weaken this barrier, allowing harmful substances to enter the bloodstream and contribute to inflammation and disease progression in ibd, ibs, and nafld. 3. metabolism : gut microbiota play a
Pushing on a pedal is an example of
[ "force", "patching", "practice", "speed" ]
Key fact: pushing on the pedals of a bike cause that bike to move
A
0
openbookqa
a patch is data that is intended to be used to modify an existing software resource such as a program or a file, often to fix bugs and security vulnerabilities. a patch may be created to improve functionality, usability, or performance. a patch is typically provided by a vendor for updating the software that they provide. a patch may be created manually, but commonly it is created via a tool that compares two versions of the resource and generates data that can be used to transform one to the other. typically, a patch needs to be applied to the specific version of the resource it is intended to modify, although there are exceptions. some patching tools can detect the version of the existing resource and apply the appropriate patch, even if it supports multiple versions. as more patches are released, their cumulative size can grow significantly, sometimes exceeding the size of the resource itself. to manage this, the number of supported versions may be limited, or a complete copy of the resource might be provided instead. patching allows for modifying a compiled ( machine language ) program when the source code is unavailable. this demands a thorough understanding of the inner workings of the compiled code, which is challenging without access to the source code. patching also allows for making changes to a program without rebuilding it from source. for small changes, it can be more economical to distribute a patch than to distribute the complete resource. although often intended to fix problems, a poorly designed patch can introduce new problems ( see software regressions
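The tool-generated patch described above — data produced by comparing two versions of a resource — can be illustrated with Python's standard `difflib` module (this only generates the patch data; applying it is left to external tools such as `patch`):

```python
import difflib

# Two versions of the same small source file.
old = ["print('helo')\n", "print('world')\n"]
new = ["print('hello')\n", "print('world')\n"]

# A patch tool compares the versions and emits data that transforms
# one into the other; unified-diff format is the common interchange form.
patch = list(difflib.unified_diff(old, new, fromfile="v1.py", tofile="v2.py"))
```

For a one-character fix like this, the patch is a handful of lines — far smaller than redistributing the whole resource, which is the economic argument the passage makes.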
exploitdb, sometimes stylized as exploit database or exploit - database, is a public and open source vulnerability database maintained by offensive security. it is one of the largest and most popular exploit databases in existence. while the database is publicly available via their website, the database can also be used by utilizing the searchsploit command - line tool which is native to kali linux. the database also contains proof - of - concepts ( pocs ), helping information security professionals learn new exploit variations. in ethical hacking and penetration testing guide, rafay baloch said exploit - db had over 20, 000 exploits, and was available in backtrack linux by default. in ceh v10 certified ethical hacker study guide, ric messier called exploit - db a " great resource ", and stated it was available within kali linux by default, or could be added to other linux distributions. the current maintainers of the database, offensive security, are not responsible for creating the database. the database was started in 2004 by a hacker group known as milw0rm and has changed hands several times. as of 2023, the database contained 45, 000 entries from more than 9, 000 unique authors. see also offensive security offensive security certified professional references external links official website
the problem of database repair is a question about relational databases which has been studied in database theory, and which is a particular kind of data cleansing. the problem asks about how we can " repair " an input relational database in order to make it satisfy integrity constraints. the goal of the problem is to be able to work with data that is " dirty ", i. e., does not satisfy the right integrity constraints, by reasoning about all possible repairs of the data, i. e., all possible ways to change the data to make it satisfy the integrity constraints, without committing to a specific choice. several variations of the problem exist, depending on : what we intend to figure out about the dirty data : figuring out if some database tuple is certain ( i. e., is in every repaired database ), figuring out if some query answer is certain ( i. e., the answer is returned when evaluating the query on every repaired database ) which kinds of ways are allowed to repair the database : can we insert new facts, remove facts ( so - called subset repairs ), and so on which repaired databases do we study : those where we only change a minimal subset of the database tuples ( e. g., minimal subset repairs ), those where we only change a minimal number of database tuples ( e. g., minimal cardinality repairs ) the problem of database repair has been studied to understand what is the complexity of these different problem variants, i. e.,
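The subset-repair variant described above can be made concrete with a brute-force sketch (illustrative names; practical algorithms avoid enumerating all subsets): given a relation that violates a key constraint, the minimal-subset repairs are the maximal consistent subsets, and a tuple is certain if it survives in every repair.

```python
from itertools import combinations

# Dirty relation emp(name, dept): "name" is meant to be a key but is violated.
emp = [("ann", "sales"), ("ann", "hr"), ("bob", "it")]

def satisfies_key(tuples):
    """Integrity constraint: the first attribute is a key."""
    names = [t[0] for t in tuples]
    return len(names) == len(set(names))

def subset_repairs(rel):
    """All maximal consistent subsets of rel (minimal-subset repairs)."""
    consistent = [set(s) for k in range(len(rel) + 1)
                  for s in combinations(rel, k) if satisfies_key(s)]
    return [s for s in consistent if not any(s < t for t in consistent)]

repairs = subset_repairs(emp)

# A tuple is certain iff it appears in every repair.
certain = set.intersection(*repairs)
```

Here there are two repairs (drop ann/sales or drop ann/hr), so neither ann tuple is certain, while bob's tuple is — exactly the "reason about all possible repairs without committing to one" idea.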
Carnivores devour omnivores which eat
[ "flora", "rocks", "crustaceans", "sand" ]
Key fact: omnivores eat plants
A
0
openbookqa
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
florabase is a public access web - based database of the flora of western australia. it provides authoritative scientific information on 12, 978 taxa, including descriptions, maps, images, conservation status and nomenclatural details. 1, 272 alien taxa ( naturalised weeds ) are also recorded. the system takes data from datasets including the census of western australian plants and the western australian herbarium specimen database of more than 803, 000 vouchered plant collections. it is operated by the western australian herbarium within the department of parks and wildlife. it was established in november 1998. in its distribution guide it uses a combination of ibra version 5. 1 and john stanley beard's botanical provinces. see also declared rare and priority flora list for other online flora databases see list of electronic floras. references external links official website
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb includes also a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without missing the focus on the searched protein or gene. in phylomedb v4 all the information available for each tree
A tree can be replaced by planting a new what?
[ "tall bush", "farm", "grass", "ford" ]
Key fact: a tree can be replaced by planting a new tree
A
0
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as nosql databases. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
a hierarchical database model is a data model in which the data is organized into a tree - like structure. the data are stored as records, each of which is a collection of one or more fields. each field contains a single value, and the collection of fields in a record defines its type. one type of field is the link, which connects a given record to associated records. using links, records link to other records, and to other records, forming a tree. an example is a " customer " record that has links to that customer's " orders ", which in turn link to " line _ items ". the hierarchical database model mandates that each child record has only one parent, whereas each parent record can have zero or more child records. the network model extends the hierarchical model by allowing multiple parents and children. in order to retrieve data from these databases, the whole tree needs to be traversed starting from the root node. both models were well suited to data that was normally stored on tape drives, which had to move the tape from end to end in order to retrieve data. when the relational database model emerged, one criticism of hierarchical database models was their close dependence on application - specific implementation. this limitation, along with the relational model's ease of use, contributed to the popularity of relational databases, despite their initially lower performance in comparison with the existing network and hierarchical models. history the hierarchical structure was developed by ibm in the 1960s and used in early mainframe dbms. records' relationships form a tree
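The customer → orders → line_items example from the passage can be sketched as nested records (hypothetical record shapes; a real hierarchical DBMS stores typed records with link fields), showing why retrieval traverses the tree from the root:

```python
# Each child record has exactly one parent; links form a tree rooted
# at the "customer" record.
db = {
    "customer": {"id": "c1", "orders": ["o1", "o2"]},
    "orders": {
        "o1": {"line_items": ["pen", "ink"]},
        "o2": {"line_items": ["paper"]},
    },
}

def all_line_items(db):
    """Retrieve every line item by walking the tree from the root."""
    items = []
    for order_id in db["customer"]["orders"]:      # customer -> orders
        items.extend(db["orders"][order_id]["line_items"])  # -> line_items
    return items
```

There is no way to reach a line item except through its parent chain — the access-path dependence that the relational model was later criticized for removing.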
bovine metabolome database is a free web database about metabolite information of bovine ( cow ). it collects 7859 metabolites in total. each metabolite hosts properties such as cas name, iupac name, structure diagram, formula, and biofluid location. it fills a gap in the information available in the bovine field. this project is supported by genome alberta & genome canada, a not - for - profit organization that is leading canada's national genomics strategy with $ 600 million in funding from the federal government. bovine metabolome database's protocol is available via the bovine metabolome database website. see also hmdb drugbank
An example of combined substances could be
[ "diamonds", "tin", "cake", "water" ]
Key fact: An example of combining two substances is pouring one substance into the other substance
C
2
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as nosql databases. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
a crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. they are characterized by symmetry, morphology, and directionally dependent physical properties. a crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. ( molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in x - ray, neutron, and electron diffraction based crystallography ). crystal structures of crystalline material are typically determined from x - ray or neutron single - crystal diffraction data and stored in crystal structure databases. they are routinely identified by comparing reflection intensities and lattice spacings from x - ray powder diffraction data with entries in powder - diffraction fingerprinting databases. crystal structures of nanometer sized crystalline samples can be determined via structure factor amplitude information from single - crystal electron diffraction data or structure factor amplitude and phase angle information from fourier transforms of hrtem images of crystallites. they are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice - fringe fingerprint plots with entries in a lattice - fringe fingerprinting database. crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. many provide structure visualization capabilities. they can be browser based or
the chemical database service is an epsrc - funded mid - range facility that provides uk academic institutions with access to a number of chemical databases. it has been hosted by the royal society of chemistry since 2013, before which it was hosted by daresbury laboratory ( part of the science and technology facilities council ). currently, the included databases are : acd / i - lab, a tool for prediction of physicochemical properties and nmr spectra from a chemical structure available chemicals directory, a structure - searchable database of commercially available chemicals cambridge structural database ( csd ), a crystallographic database of organic and organometallic structures inorganic crystal structure database ( icsd ), a crystallographic database of inorganic structures crystalworks, a database combining data from csd, icsd and crystmet detherm, a database of thermophysical data for chemical compounds and mixtures spresiweb, a database of organic compounds and reactions = = references = =
An action that may provide a small bit of warmth could be
[ "looking out of a window", "putting the radio on", "looking at a snow storm", "smashing hands together repeatedly" ]
Key fact: friction occurs when two object 's surfaces move against each other
D
3
openbookqa
then the bulk of effort concentrates on writing the proper mediator code that will transform predicates on weather into a query over the weather website. this effort can become complex if some other source also relates to weather, because the designer may need to write code to properly combine the results from the two sources. on the other hand, in lav, the source database is modeled as a set of views over G.
then the bulk of effort concentrates on writing the proper mediator code that will transform predicates on weather into a query over the weather website. this effort can become complex if some other source also relates to weather, because the designer may need to write code to properly combine the results from the two sources. on the other hand, in lav, the source database is modeled as a set of views over G.
weather stations collect data on land and sea. weather balloons, satellites, and radar collect data in the atmosphere.
What loosens soil?
[ "gopher homes", "wind", "heat", "bird nests" ]
Key fact: tunnels in soil loosen that soil
A
0
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as nosql databases. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
cameras can be used to study birds : to monitor nests and record information about nest survival, nesting behaviors, or even to catch nest predators in the act. the timing of breeding in relation to weather variables can be studied, as well as the size of eggs and chicks in relation to food quality and abundance. records of habitat variables at each nest provide helpful information on the birds' nest site selection criteria, and maps of all nests found in a study area allow for examination of how territories are distributed through the habitat.
the eggnog database is a database of biological information hosted by the embl. it is based on the original idea of cogs ( clusters of orthologous groups ) and expands that idea to non - supervised orthologous groups constructed from numerous organisms. the database was created in 2007 and updated to version 4. 5 in 2015. eggnog stands for evolutionary genealogy of genes : non - supervised orthologous groups. references external links http : / / eggnogdb. embl. de
A camera can take an image and
[ "preserve it", "edit it", "lock it in", "produce it" ]
Key fact: a camera is used for recording images
A
0
openbookqa
database design is the organization of data according to a database model. the designer determines what data must be stored and how the data elements interrelate. with this information, they can begin to fit the data to the database model. a database management system manages the data accordingly. database design is a process that consists of several steps. conceptual data modeling the first step of database design involves classifying data and identifying interrelationships. the theoretical representation of data is called an ontology or a conceptual data model. determining data to be stored in a majority of cases, the person designing a database is a person with expertise in database design, rather than expertise in the domain from which the data to be stored is drawn e. g. financial information, biological information etc. therefore, the data to be stored in a particular database must be determined in cooperation with a person who does have expertise in that domain, and who is aware of the meaning of the data to be stored within the system. this process is one which is generally considered part of requirements analysis, and requires skill on the part of the database designer to elicit the needed information from those with the domain knowledge. this is because those with the necessary domain knowledge often cannot clearly express the system requirements for the database as they are unaccustomed to thinking in terms of the discrete data elements which must be stored. data to be stored can be determined by requirement specification. determining data relationships once a database designer is aware of the data which is to be
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
a database dump contains a record of the table structure and / or the data from a database and is usually in the form of a list of sql statements ( " sql dump " ). a database dump is most often used for backing up a database so that its contents can be restored in the event of data loss. corrupted databases can often be recovered by analysis of the dump. database dumps are often published by free content projects, to facilitate reuse, forking, offline use, and long - term digital preservation. dumps can be transported into environments with internet blackouts or otherwise restricted internet access, as well as facilitate local searching of the database using sophisticated tools such as grep. see also import and export of data core dump databases database management system sqlyog - mysql gui tool to generate database dump data portability external links mysqldump a database backup program postgresql dump backup methods, for postgresql databases.
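The dump-and-restore cycle described above can be demonstrated with Python's standard `sqlite3` module, whose `Connection.iterdump()` produces exactly the kind of SQL-statement dump the passage describes:

```python
import sqlite3

# Build a small in-memory database to dump.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("INSERT INTO t (name) VALUES ('alice')")
con.commit()

# The dump is a list of SQL statements recording table structure and data.
dump = list(con.iterdump())

# Restoring from the dump: replay the statements into a fresh database,
# as one would after data loss.
restored = sqlite3.connect(":memory:")
restored.executescript("\n".join(dump))
```

Because the dump is plain SQL text, it is also greppable offline — the local-searching use case the passage mentions.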
The moon's surface
[ "features of variety of landscape features", "is covered completely in water", "is 100% flat and smooth", "is 100% covered in asteroid created craters" ]
Key fact: the moon 's surface contains highlands
A
0
openbookqa
the landscape of the moon - its surface features - is very different from earth. the lunar landscape is covered by craters caused by asteroid impacts ( figure below ). the craters are bowl - shaped basins on the moon ’ s surface. because the moon has no water, wind, or weather, the craters remain unchanged.
it includes land, water, and even the atmosphere to a certain extent.
topographic maps are flat maps that show the three - dimensional surface features of an area. topographic maps help users see the how the land changes in elevation.
In a drought, a resource that will be lacking is
[ "soil", "land", "droplets", "cheese" ]
Key fact: drought means available water decreases in an environment
C
2
openbookqa
the plant proteome database is a national science foundation - funded project to determine the biological function of each protein in plants. it includes data for two plants that are widely studied in molecular biology, arabidopsis thaliana and maize ( zea mays ). initially the project was limited to plant plastids, under the name of the plastid pdb, but was expanded and renamed plant pdb in november 2007. see also proteome references external links plant proteome database home page
the chemical database service is an epsrc - funded mid - range facility that provides uk academic institutions with access to a number of chemical databases. it has been hosted by the royal society of chemistry since 2013, before which it was hosted by daresbury laboratory ( part of the science and technology facilities council ). currently, the included databases are : acd / i - lab, a tool for prediction of physicochemical properties and nmr spectra from a chemical structure available chemicals directory, a structure - searchable database of commercially available chemicals cambridge structural database ( csd ), a crystallographic database of organic and organometallic structures inorganic crystal structure database ( icsd ), a crystallographic database of inorganic structures crystalworks, a database combining data from csd, icsd and crystmet detherm, a database of thermophysical data for chemical compounds and mixtures spresiweb, a database of organic compounds and reactions = = references = =
the plant dna c - values database ( https://cvalues.science.kew.org/ ) is a comprehensive catalogue of c - value ( nuclear dna content, or in diploids, genome size ) data for land plants and algae. the database was created by prof. michael d. bennett and dr. ilia j. leitch of the royal botanic gardens, kew, uk. the database was originally launched as the " angiosperm dna c - values database " in april 1997, essentially as an online version of collected data lists that had been published by prof. bennett and colleagues since the 1970s. release 1.0 of the more inclusive plant dna c - values database was launched in 2001, with subsequent releases 2.0 in january 2003 and 3.0 in december 2004. in addition to the angiosperm dataset made available in 1997, the database has been expanded taxonomically several times and now includes data from pteridophytes ( since 2000 ), gymnosperms ( since 2001 ), bryophytes ( since 2001 ), and algae ( since 2004 ) ( see ( 1 ) for update history ). ( note that each of these subset databases is cited individually as they may contain different sets of authors ). the most recent release of the database ( release 7.1 ) went live in april 2019. it contains data for 12,273 species of plants comprising 10,770 angiosperms, 421 gymnos
A lizard that passed away centuries ago may be viewed most easily today in some ways through
[ "globes", "mirrors", "telescopes", "sediment" ]
Key fact: An example of a fossil is a footprint in a rock
D
3
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
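The property-graph model described above, with nodes, labelled directed edges, properties on both, and one-operation traversal of stored relationships, can be sketched in a few lines of Python; this is an illustrative in-memory toy, not any real graph database's API:

```python
# minimal in-memory sketch of the property-graph model: nodes and edges both
# carry properties, and relationships are stored directly with each node, so
# following them needs no join-style lookup
class PropertyGraph:
    def __init__(self):
        self.nodes = {}        # node id -> properties
        self.out_edges = {}    # node id -> list of (label, target, properties)

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props
        self.out_edges.setdefault(node_id, [])

    def add_edge(self, src, label, dst, **props):
        # relationships are first-class: labelled, directed, with properties
        self.out_edges[src].append((label, dst, props))

    def neighbors(self, node_id, label=None):
        # "retrieved with one operation": the stored adjacency list is read directly
        return [dst for (lbl, dst, _) in self.out_edges[node_id]
                if label is None or lbl == label]

g = PropertyGraph()
g.add_node("alice", kind="person")
g.add_node("bob", kind="person")
g.add_node("acme", kind="company")
g.add_edge("alice", "KNOWS", "bob", since=2020)
g.add_edge("alice", "WORKS_AT", "acme")
```

A relational engine would instead store the edges in a table and rejoin them at query time, which is the abstraction gap the passage mentions.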
an array database management system or array dbms provides database services specifically for arrays ( also called raster data ), that is : homogeneous collections of data items ( often called pixels, voxels, etc. ), sitting on a regular grid of one, two, or more dimensions. often arrays are used to represent sensor, simulation, image, or statistics data. such arrays tend to be big data, with single objects frequently ranging into terabyte and soon petabyte sizes ; for example, today's earth and space observation archives typically grow by terabytes a day. array databases aim at offering flexible, scalable storage and retrieval on this information category. overview in the same style as standard database systems do on sets, array dbmss offer scalable, flexible storage and flexible retrieval / manipulation on arrays of ( conceptually ) unlimited size. as in practice arrays never appear standalone, such an array model normally is embedded into some overall data model, such as the relational model. some systems implement arrays as an analogy to tables, some introduce arrays as an additional attribute type. management of arrays requires novel techniques, particularly due to the fact that traditional database tuples and objects tend to fit well into a single database page ( a unit of disk access on the server, typically 4 kb ), while array objects easily can span several media. the prime task of the array storage manager is to give fast access to large arrays and sub - arrays. to this end, arrays get partitioned, during insertion, into
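The tile-based partitioning that the passage says array storage managers perform on insertion can be sketched as follows; the tile size and function names are illustrative assumptions, not any particular system's design:

```python
# sketch of tiled array storage: a large 2-D array is partitioned into
# fixed-size tiles on insertion, and a cell read touches only the tile
# that contains the requested position
TILE = 4  # tile edge length (illustrative)

def store_tiled(array2d):
    """Partition a row-major 2-D list into a dict keyed by tile coordinates."""
    rows, cols = len(array2d), len(array2d[0])
    tiles = {}
    for r in range(rows):
        for c in range(cols):
            key = (r // TILE, c // TILE)
            tiles.setdefault(key, {})[(r % TILE, c % TILE)] = array2d[r][c]
    return tiles

def read_cell(tiles, r, c):
    # only one tile is accessed, regardless of total array size
    return tiles[(r // TILE, c // TILE)][(r % TILE, c % TILE)]

data = [[r * 10 + c for c in range(8)] for r in range(8)]
tiles = store_tiled(data)
```

Reading a sub-array generalizes this: only the tiles overlapping the requested region need to be fetched from disk, which is what makes sub-array access fast on terabyte-scale objects.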
the web - based map collection includes :
A blue colored item will only reflect
[ "all colors in the spectrum", "only the color white", "a combination of colors", "an exact matching color" ]
Key fact: if an object is blue then that object reflects only blue light
D
3
openbookqa
in computing, indexed color is a technique to manage digital images' colors in a limited fashion, in order to save computer memory and file storage, while speeding up display refresh and file transfers. it is a form of vector quantization compression. when an image is encoded in this way, color information is not directly carried by the image pixel data, but is stored in a separate piece of data called a color lookup table ( clut ) or palette : an array of color specifications. every element in the array represents a color, indexed by its position within the array. each image pixel does not contain the full specification of its color, but only its index into the palette. this technique is sometimes referred to as pseudocolor or indirect color, as colors are addressed indirectly. history early graphics display systems that used 8 - bit indexed color with frame buffers and color lookup tables include shoup's superpaint ( 1973 ) and the video frame buffer described in 1975 by kajiya, sutherland, and cheadle. these supported a palette of 256 rgb colors. superpaint used a shift - register frame buffer, while the kajiya et al. system used a random - access frame buffer. a few earlier systems used 3 - bit color, but typically treated the bits as independent red, green, and blue on / off bits rather than jointly as an index into a clut. palette size the palette itself stores a limited number of distinct colors ; 4, 16 or
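A minimal sketch of the palette mechanism described above: each pixel holds a small index, and the colour lookup table maps that index to a full RGB triple (the palette contents and image here are arbitrary):

```python
# the colour lookup table (CLUT): an array of color specifications,
# each color addressed by its position in the array
palette = [
    (0, 0, 0),        # index 0: black
    (255, 255, 255),  # index 1: white
    (255, 0, 0),      # index 2: red
    (0, 0, 255),      # index 3: blue
]

# a 2x4 "image" holding one small index per pixel instead of three
# bytes of RGB, which is where the memory saving comes from
indexed_pixels = [
    [0, 1, 2, 3],
    [3, 2, 1, 0],
]

def decode(indexed, clut):
    """Expand an indexed image to full RGB via the lookup table."""
    return [[clut[i] for i in row] for row in indexed]

rgb = decode(indexed_pixels, palette)
```

With a 4-entry palette each pixel needs only 2 bits, and repainting the screen in new colors only requires rewriting the small palette, not the pixel data.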
in graph theory, an exact coloring is a ( proper ) vertex coloring in which every pair of colors appears on exactly one pair of adjacent vertices. that is, it is a partition of the vertices of the graph into disjoint independent sets such that, for each pair of distinct independent sets in the partition, there is exactly one edge with endpoints in each set. complete graphs, detachments, and euler tours every n - vertex complete graph kn has an exact coloring with n colors, obtained by giving each vertex a distinct color. every graph with an n - color exact coloring may be obtained as a detachment of a complete graph, a graph obtained from the complete graph by splitting each vertex into an independent set and reconnecting each edge incident to the vertex to exactly one of the members of the corresponding independent set. when k is an odd number, a path or cycle with $\tbinom{k}{2}$ edges has an exact coloring, obtained by forming an exact coloring of the complete graph kk and then finding an euler tour of this complete graph. for instance, a path with three edges has a complete 3 - coloring. related types of coloring exact colorings are closely related to harmonious colorings ( colorings in which each pair of colors appears at most once ) and complete colorings ( colorings in which each pair of colors appears at least once ). clearly, an exact coloring is a coloring that
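The claim above, that giving each vertex of a complete graph its own color yields an exact coloring, can be checked mechanically; this verifier is written for the sketch and is not drawn from the source:

```python
from itertools import combinations

def is_exact_coloring(edges, color):
    """True if the coloring is proper and every unordered pair of distinct
    colors appears on exactly one edge."""
    seen = {}
    for u, v in edges:
        if color[u] == color[v]:
            return False  # not even a proper coloring
        pair = frozenset((color[u], color[v]))
        seen[pair] = seen.get(pair, 0) + 1
    colors = set(color.values())
    # every pair of colors must occur exactly once
    return all(seen.get(frozenset(p), 0) == 1 for p in combinations(colors, 2))

# K_5: every pair of the 5 vertices is an edge
n = 5
k5_edges = list(combinations(range(n), 2))
coloring = {v: v for v in range(n)}  # each vertex gets a distinct color
```

Since K_n has exactly one edge between any two vertices, the distinct-color assignment makes each color pair meet exactly once, which is the definition being verified.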
a color - color diagram is a means of comparing the colors of an astronomical object at different wavelengths. astronomers typically observe at narrow bands around certain wavelengths, and objects observed will have different brightnesses in each band. the difference in brightness between two bands is referred to as an object's color index, or simply color. on color - color diagrams, the color defined by two wavelength bands is plotted on the horizontal axis, and the color defined by another brightness difference will be plotted on the vertical axis. background although stars are not perfect blackbodies, to first order the spectra of light emitted by stars conforms closely to a black - body radiation curve, also referred to sometimes as a thermal radiation curve. the overall shape of a black - body curve is uniquely determined by its temperature, and the wavelength of peak intensity is inversely proportional to temperature, a relation known as wien's displacement law. thus, observation of a stellar spectrum allows determination of its effective temperature. obtaining complete spectra for stars through spectrometry is much more involved than simple photometry in a few bands. thus by comparing the magnitude of the star in multiple different color indices, the effective temperature of the star can still be determined, as magnitude differences between each color will be unique for that temperature. as such, color - color diagrams can be used as a means of representing the stellar population, much like a hertzsprung - russell diagram, and stars of different spectral classes will inhabit different parts of the diagram. this feature
A load of earth and stone that folds upon itself repeatedly could be considered
[ "a still", "a group", "a range", "a pond" ]
Key fact: mountains are formed by plate tectonics
C
2
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by
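The relational-model basics above, data presented as tables of rows and columns and manipulated with relational operators through SQL, can be illustrated with Python's built-in sqlite3; the schema and data are invented for the example:

```python
import sqlite3

# data as relations: two tables of rows and columns
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dept (id INTEGER PRIMARY KEY, name TEXT)")
con.execute(
    "CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT,"
    " dept_id INTEGER REFERENCES dept(id))"
)
con.execute("INSERT INTO dept VALUES (1, 'research'), (2, 'sales')")
con.execute("INSERT INTO emp VALUES (10, 'codd', 1), (11, 'chen', 2)")

# a join is a relational operator that combines two tables on a shared column
rows = con.execute(
    "SELECT emp.name, dept.name FROM emp"
    " JOIN dept ON emp.dept_id = dept.id ORDER BY emp.id"
).fetchall()
```

The query result is itself a relation (rows and columns), which is the closure property that makes relational operators composable.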
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, object - relational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
If a screwdriver is bought from a high end store known for high quality, it will most likely end up
[ "falling apart quite fast", "breaking in half immediately", "with rotting wood this week", "passed down from parent to child" ]
Key fact: as the time a tool lasts increases , the number of tools discarded will decrease
D
3
openbookqa
a hierarchical database model is a data model in which the data is organized into a tree - like structure. the data are stored as records which is a collection of one or more fields. each field contains a single value, and the collection of fields in a record defines its type. one type of field is the link, which connects a given record to associated records. using links, records link to other records, and to other records, forming a tree. an example is a " customer " record that has links to that customer's " orders ", which in turn link to " line_items ". the hierarchical database model mandates that each child record has only one parent, whereas each parent record can have zero or more child records. the network model extends the hierarchical by allowing multiple parents and children. in order to retrieve data from these databases, the whole tree needs to be traversed starting from the root node. both models were well suited to data that was normally stored on tape drives, which had to move the tape from end to end in order to retrieve data. when the relational database model emerged, one criticism of hierarchical database models was their close dependence on application - specific implementation. this limitation, along with the relational model's ease of use, contributed to the popularity of relational databases, despite their initially lower performance in comparison with the existing network and hierarchical models. history the hierarchical structure was developed by ibm in the 1960s and used in early mainframe dbms. records' relationships form a tree
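The customer → orders → line_items example in the passage can be sketched as a tree of linked records, with retrieval as a traversal from the root; the class and field names are illustrative, not any real DBMS's API:

```python
# sketch of the hierarchical model: each record is a collection of fields,
# and link fields connect a record to its child records, forming a tree
class Record:
    def __init__(self, kind, **fields):
        self.kind = kind
        self.fields = fields
        self.children = []   # link fields to child records

    def add_child(self, child):
        # the hierarchical model allows one parent but many children
        self.children.append(child)
        return child

def traverse(record):
    """Depth-first walk from the root, as hierarchical retrieval requires."""
    yield record
    for child in record.children:
        yield from traverse(child)

customer = Record("customer", name="acme")
order = customer.add_child(Record("order", number=1))
order.add_child(Record("line_item", sku="bolt", qty=3))
kinds = [r.kind for r in traverse(customer)]
```

Note how every access starts at the root: there is no way to reach a line_item except through its one parent chain, which is exactly the rigidity the relational model removed.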
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
the problem of database repair is a question about relational databases which has been studied in database theory, and which is a particular kind of data cleansing. the problem asks about how we can " repair " an input relational database in order to make it satisfy integrity constraints. the goal of the problem is to be able to work with data that is " dirty ", i. e., does not satisfy the right integrity constraints, by reasoning about all possible repairs of the data, i. e., all possible ways to change the data to make it satisfy the integrity constraints, without committing to a specific choice. several variations of the problem exist, depending on : what we intend to figure out about the dirty data : figuring out if some database tuple is certain ( i. e., is in every repaired database ), figuring out if some query answer is certain ( i. e., the answer is returned when evaluating the query on every repaired database ) which kinds of ways are allowed to repair the database : can we insert new facts, remove facts ( so - called subset repairs ), and so on which repaired databases do we study : those where we only change a minimal subset of the database tuples ( e. g., minimal subset repairs ), those where we only change a minimal number of database tuples ( e. g., minimal cardinality repairs ) the problem of database repair has been studied to understand what is the complexity of these different problem variants, i. e.,
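A toy illustration of subset repairs and certain tuples, assuming a single key constraint; the data, constraint, and helper names are invented for this sketch:

```python
from itertools import combinations

# the "dirty" table violates the constraint that the first column is a key
table = {("ann", "paris"), ("ann", "rome"), ("bob", "oslo")}

def consistent(db):
    """Integrity constraint: each person appears at most once."""
    people = [p for p, _ in db]
    return len(people) == len(set(people))

def subset_repairs(db):
    """All maximal consistent subsets of the database (subset repairs)."""
    subsets = [set(s) for r in range(len(db) + 1)
               for s in combinations(db, r) if consistent(set(s))]
    # keep only the maximal ones: change a minimal subset of the tuples
    return [s for s in subsets if not any(s < t for t in subsets)]

repairs = subset_repairs(table)
# a tuple is *certain* if it survives in every possible repair
certain = set.intersection(*repairs)
```

Here the conflicting "ann" tuples give two repairs, and only the untouched "bob" tuple is certain; real work on the problem studies the complexity of computing such answers without enumerating all repairs.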
If your pet bird is having trouble flying
[ "lock them in their cage more", "give up and get a new bird", "try putting them on a diet", "take away all their perches" ]
Key fact: as the weight of an animal decreases , that animal will fly more easily
C
2
openbookqa
today, some 1,200 species of birds are threatened with extinction by human actions. humans need to take steps to protect this precious and important natural resource. what can you do to help?
pellis, s. m. what is " fixed " in a fixed action pattern? a problem of methodology. bird behaviour.
in most species, one or both parents take care of the eggs. they sit on the eggs to keep them warm until they hatch. this is called incubation. after the eggs hatch, the parents generally continue their care. they feed the hatchlings until they are big enough to feed on their own. this is usually at a younger age in ground - nesting birds such as ducks than in tree - nesting birds such as robins.
A large body of salty water drying up is responsible for the creation of the
[ "salt flats", "salt water taffy", "Rocky mountains", "ocean winds" ]
Key fact: An example of a change in the Earth is an ocean becoming a wooded area
A
0
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
arangodb is a graph database system developed by arangodb inc. arangodb is a multi - model database system since it supports three data models ( graphs, json documents, key / value ) with one database core and a unified query language aql ( arangodb query language ). aql is mainly a declarative language and allows the combination of different data access patterns in a single query. arangodb is a nosql database system but aql is similar in many ways to sql, it uses rocksdb as a storage engine. history arangodb gmbh was founded in 2014 by claudius weinberger and frank celler. they originally called the database system " a versatile object container ", or avoc for short, leading them to call the database avocadodb. later, they changed the name to arangodb. the word " arango " refers to a little - known avocado variety grown in cuba. in january 2017 arangodb raised a seed round investment of 4.2 million euros led by target partners. in march 2019 arangodb raised 10 million dollars in series a funding led by bow capital. in october 2021 arangodb raised 27.8 million dollars in series b funding led by iris capital. release history features json : arangodb uses json as a default storage format, but internally it uses arangodb velocypack, a fast and compact binary format for serialization and storage. arango
tassdb ( tandem splice site database ) is a database of tandem splice sites of eight species see also alternative splicing references external links https : / / archive. today / 20070106023527 / http : / / helios. informatik. uni - freiburg. de / tassdb /.
A shark is looking for a quick bite, so it
[ "eats some seeds", "eats a sandbar", "eats some seaweed", "eats an eel" ]
Key fact: carnivores are predators
D
3
openbookqa
eating ( also known as consuming ) is the ingestion of food. in biology, this is typically done to provide a heterotrophic organism with energy and nutrients and to allow for growth. animals and other heterotrophs must eat in order to survive carnivores eat other animals, herbivores eat plants, omnivores consume a mixture of both plant and animal matter, and detritivores eat detritus. fungi digest organic matter outside their bodies as opposed to animals that digest their food inside their bodies. for humans, eating is more complex, but is typically an activity of daily living. physicians and dieticians consider a healthful diet essential for maintaining peak physical condition. some individuals may limit their amount of nutritional intake. this may be a result of a lifestyle choice : as part of a diet or as religious fasting. limited consumption may be due to hunger or famine. overconsumption of calories may lead to obesity and the reasons behind it are myriad, however, its prevalence has led some to declare an " obesity epidemic ". eating practices among humans many homes have a large kitchen area devoted to preparation of meals and food, and may have a dining room, dining hall, or another designated area for eating. most societies also have restaurants, food courts, and food vendors so that people may eat when away from home, when lacking time to prepare food, or as a social occasion. at their highest level of sophistication,
grazers, such as sea urchins, are organisms that feed on available plants. sea urchins are omnivorous, eating both plants and animals. the sea urchin mainly feeds on algae on the coral and rocks, along with decomposing matter such as dead fish, mussels, sponges, and barnacles.
minerals are also obtained from the diet.
What is the source of energy for physical cycles on Earth?
[ "the closest planet to earth", "the closest yellow dwarf star", "the seven different oceans", "various gas powered engines" ]
Key fact: the sun is the source of energy for physical cycles on Earth
B
1
openbookqa
nevertheless, the parameters of the second planet are still highly uncertain. on the other hand, the catalog of nearby exoplanets gives a period of 2,190 days, which would put the planets close to a 2 : 1 ratio of orbital periods, though the reference for these parameters is uncertain : the original fischer et al. paper is cited as a reference in spite of the fact that it gives different parameters, though this solution has been adopted by the extrasolar planets encyclopaedia. in 2010, the discovery of a third planet ( 47 uma d ) was made by using the bayesian kepler periodogram. using this model of this planetary system it was determined that it is 100,000 times more likely to have three planets than two planets.
mercury is the smallest planet and the closest to the sun. it has an extremely thin atmosphere so surface temperatures range from very hot to very cold. like the moon, it is covered with craters.
backyard worlds : planet 9 is a nasa - funded citizen science project which is part of the zooniverse web portal. it aims to discover new brown dwarfs, faint objects that are less massive than stars, some of which might be among the nearest neighbors of the solar system, and might conceivably detect the hypothesized planet nine. the project's principal investigator is marc kuchner, an astrophysicist at nasa's goddard space flight center. origins backyard worlds was launched in february 2017, shortly before the 87th anniversary of the discovery of pluto, which until its reclassification as a dwarf planet in 2006 was considered the solar system's ninth major planet. since that reclassification, evidence has come to light that there may be another planet located in the outer region of the solar system far beyond the kuiper belt, most commonly referred to as planet nine. this hypothetical new planet would be located so far from the sun that it would reflect only a very small amount of visible light, rendering it too faint to be detected in most astronomical surveys conducted to date. however, models of the conjectured planet's atmosphere suggest that methane condensation could in some cases make it detectable in infrared images captured by the wide - field infrared survey explorer ( wise ) space telescope. due to the effects of proper motion and parallax, planet nine would appear to move in a distinctive way between images taken of the same patch of sky at different times.
If the days are more chilled than before, and yet still avoid freezing degrees, a likely assumption is that
[ "daylight lasts longer", "daylight has lessened", "daylight is brighter", "nights are shorter" ]
Key fact: when the seasons change from the summer to the fall , the amount of daylight will decrease
B
1
openbookqa
a temporal database stores data relating to time instances. it offers temporal data types and stores information relating to past, present and future time. temporal databases can be uni - temporal, bi - temporal or tri - temporal. more specifically the temporal aspects usually include valid time, transaction time and / or decision time. valid time is the time period during or event time at which a fact is true in the real world. transaction time is the time at which a fact was recorded in the database. decision time is the time at which the decision was made about the fact. used to keep a history of decisions about valid times. types uni - temporal a uni - temporal database has one axis of time, either the validity range or the system time range. bi - temporal a bi - temporal database has two axes of time : valid time transaction time or decision time tri - temporal a tri - temporal database has three axes of time : valid time transaction time decision time this approach introduces additional complexities. temporal databases are in contrast to current databases ( not to be confused with currently available databases ), which store only facts which are believed to be true at the current time. features temporal databases support managing and accessing temporal data by providing one or more of the following features : a time period datatype, including the ability to represent time periods with no end ( infinity or forever ) the ability to define valid and transaction time period attributes and bitemporal relations system - maintained transaction time temporal primary keys, including
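A bi-temporal table with a valid-time axis and a transaction-time axis can be sketched as follows; the facts, the column layout, and the half-open interval convention are illustrative assumptions for the sketch:

```python
from datetime import date

# each fact carries a valid-time interval (when it was true in the real
# world) and a transaction-time interval (when the database believed it)
FOREVER = date.max  # stand-in for a time period with no end

rows = [
    # (fact, valid_from, valid_to, tx_from, tx_to)
    ("alice lives in oslo", date(2020, 1, 1), FOREVER,
     date(2020, 1, 5), date(2023, 3, 1)),
    # correction recorded on 2023-03-01: the valid interval was closed
    ("alice lives in oslo", date(2020, 1, 1), date(2023, 1, 1),
     date(2023, 3, 1), FOREVER),
    ("alice lives in rome", date(2023, 1, 1), FOREVER,
     date(2023, 3, 1), FOREVER),
]

def facts(valid_at, known_at):
    """Facts true at `valid_at` according to what the database knew at `known_at`."""
    return [f for (f, vf, vt, tf, tt) in rows
            if vf <= valid_at < vt and tf <= known_at < tt]
```

Nothing is ever deleted: the correction is a new row with a new transaction-time interval, so the database can still reproduce what it believed at any earlier moment.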
a circadian clock, or circadian oscillator, also known as one's internal alarm clock, is a biochemical oscillator that cycles with a stable phase and is synchronized with solar time. such a clock's in vivo period is necessarily almost exactly 24 hours ( the earth's current solar day ). in most living organisms, internally synchronized circadian clocks make it possible for the organism to anticipate daily environmental changes corresponding with the day - night cycle and adjust its biology and behavior accordingly. the term circadian derives from the latin circa ( about ) dies ( a day ), since when taken away from external cues ( such as environmental light ), they do not run to exactly 24 hours. clocks in humans in a lab in constant low light, for example, will average about 24.2 hours per day, rather than 24 hours exactly. although the normal body clock oscillates with an endogenous period of close to 24 hours, it entrains when it receives sufficient daily corrective signals from the environment, primarily daylight and darkness. circadian clocks are the central mechanisms that drive circadian rhythms. they consist of three major components : a central biochemical oscillator with a period of about 24 hours that keeps time ; a series of input pathways to this central oscillator to allow entrainment of the clock ; a series of output pathways tied to distinct phases of the oscillator that regulate overt rhythms in biochemistry, physiology, and
diurnality is a form of plant and animal behavior characterized by activity during daytime, with a period of sleeping or other inactivity at night. the common adjective used for daytime activity is " diurnal ". the timing of activity by an animal depends on a variety of environmental factors such as the temperature, the ability to gather food by sight, the risk of predation, and the time of year. diurnality is a cycle of activity within a 24 - hour period ; cyclic activities called circadian rhythms are endogenous cycles not dependent on external cues or environmental factors except for a zeitgeber. animals active during twilight are crepuscular, those active during the night are nocturnal and animals active at sporadic times during both night and day are cathemeral. plants that open their flowers during the daytime are described as diurnal, while those that bloom during nighttime are nocturnal. the timing of flower opening is often related to the time at which preferred pollinators are foraging. for example, sunflowers open during the day to attract bees, whereas the night - blooming cereus opens at night to attract large sphinx moths. animals many types of animals are classified as being diurnal, meaning they are active during the day time and inactive or have periods of rest during the night time. commonly classified diurnal animals include mammals, birds, and reptiles. most primates are diurnal, including humans. scientifically classifying diurnality within animals
An Indian hawthorn that has received more water will usually be
[ "taller", "older", "less healthy", "colder" ]
Key fact: as the amount of water received by a plant increases, that plant will usually grow
A
0
openbookqa
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ) system, rather than an oltp ( online transaction processing ) system. modern decision - support and classical statistical databases are often closer to the relational model than to the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse, with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in place and use compression techniques to squeeze them out, or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
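The inference problem described at the end of the passage is easy to demonstrate: even an aggregates - only interface leaks individual records when two overlapping aggregates are differenced. The data and names below are made up for illustration.

```python
# Sketch of the aggregate-inference attack: SUM over everyone minus
# SUM over everyone-but-one reveals that one person's value exactly.
salaries = {"alice": 50, "bob": 60, "carol": 70}  # hypothetical records

def aggregate_sum(db, exclude=None):
    """An 'aggregates only' query interface: returns SUM over selected rows."""
    return sum(v for k, v in db.items() if k != exclude)

total = aggregate_sum(salaries)               # allowed: SUM over everyone
without_bob = aggregate_sum(salaries, "bob")  # allowed: SUM excluding bob
bobs_salary = total - without_bob             # bob's record, reconstructed
```

This is why statistical databases restrict query set sizes or perturb results rather than simply hiding individual rows.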
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as nosql databases. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, so this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
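The model in the passage, nodes with properties plus directed, labelled edges that are first - class and traversable in one operation, can be sketched with adjacency lists. All node names, labels, and properties below are invented for the example.

```python
# Minimal sketch of the graph model: edges are stored with the node,
# so following a relationship is a direct lookup, not a join.
nodes = {
    "alice": {"kind": "person"},
    "bob":   {"kind": "person"},
    "neo":   {"kind": "database"},
}
# edges[source] -> list of (label, target, properties):
# labelled, directed, and carrying properties, as described above.
edges = {
    "alice": [("KNOWS", "bob", {"since": 2019}),
              ("USES", "neo", {})],
    "bob":   [("USES", "neo", {})],
}

def neighbours(node, label):
    """Retrieve related nodes in one operation on the stored adjacency list."""
    return [target for (lbl, target, _props) in edges.get(node, [])
            if lbl == label]

alice_knows = neighbours("alice", "KNOWS")
```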
the hospital records database is a database provided by the wellcome trust and uk national archives which provides information on the existence and location of the records of uk hospitals. this includes the location and dates of administrative and clinical records, the existence of catalogues, and links to some online hospital catalogues. the website was proposed as a resource of the month by the royal society of medicine in 2009
In order for trees to use sunlight they must
[ "their roots", "the light switch", "CPR", "respire" ]
Key fact: living things require respiration to use energy
D
3
openbookqa
until the 1980s, databases were viewed as computer systems that stored record - oriented and business data such as manufacturing inventories, bank records, and sales transactions. a database system was not expected to merge numeric data with text, images, or multimedia information, nor was it expected to automatically notice patterns in the data it stored. in the late 1980s the concept of an intelligent database was put forward as a system that manages information ( rather than data ) in a way that appears natural to users and which goes beyond simple record keeping. the term was introduced in 1989 by the book intelligent databases by kamran parsaye, mark chignell, setrag khoshafian and harry wong. the concept postulated three levels of intelligence for such systems : high level tools, the user interface and the database engine. the high level tools manage data quality and automatically discover relevant patterns in the data with a process called data mining. this layer often relies on the use of artificial intelligence techniques. the user interface uses hypermedia in a form that uniformly manages text, images and numeric data. the intelligent database engine supports the other two layers, often merging relational database techniques with object orientation. in the twenty - first century, intelligent databases have now become widespread, e. g. hospital databases can now call up patient histories consisting of charts, text and x - ray images just with a few mouse clicks, and many corporate databases include decision support tools based on sales pattern analysis. external links intelligent databases, book
the sensors show when the patient is asleep and awake, and transmit data used to determine when the patient is in rem sleep. the nap trial begins when the lights are turned off. the patient is asked to perform simple tasks to test that the equipment is working properly.
continuity of care record ( ccr ) is a health record standard specification developed jointly by astm international, the massachusetts medical society ( mms ), the healthcare information and management systems society ( himss ), the american academy of family physicians ( aafp ), the american academy of pediatrics ( aap ), and other health informatics vendors. although there is no official " death " of the ccr standard announced anywhere, the ccr is effectively dead in any major industry use, with most organizations now transmitting documents and information with hl7 standards ( v2, cda / c - cda, or fhir ). another indication of its death is that the astm standard specification for ccr has not been updated since 2010. background and scope the ccr was generated by health care practitioners based on their views of the data they may want to share in any given situation. the ccr document is used to allow timely and focused transmission of information to other health professionals involved in the patient's care. the ccr aims to increase the role of the patient in managing their health and reduce error while improving continuity of patient care. the ccr standard is a patient health summary standard. it is a way to create flexible documents that contain the most relevant and timely core health information about a patient, and to send these electronically from one caregiver to another. the ccr's intent is also to create a standard of health information transportability when a patient is transferred or
An example of a chemical change is
[ "milk to yogurt", "mixing marbles", "my chemical romance", "shredding paper" ]
Key fact: An example of a chemical change is acid breaking down substances
A
0
openbookqa
the chemical database service is an epsrc - funded mid - range facility that provides uk academic institutions with access to a number of chemical databases. it has been hosted by the royal society of chemistry since 2013, before which it was hosted by daresbury laboratory ( part of the science and technology facilities council ). currently, the included databases are : acd / i - lab, a tool for prediction of physicochemical properties and nmr spectra from a chemical structure available chemicals directory, a structure - searchable database of commercially available chemicals cambridge structural database ( csd ), a crystallographic database of organic and organometallic structures inorganic crystal structure database ( icsd ), a crystallographic database of inorganic structures crystalworks, a database combining data from csd, icsd and crystmet detherm, a database of thermophysical data for chemical compounds and mixtures spresiweb, a database of organic compounds and reactions
a chemical database is a database specifically designed to store chemical information. this information is about chemical and crystal structures, spectra, reactions and syntheses, and thermophysical data. types of chemical databases bioactivity database bioactivity databases correlate structures or other chemical information to bioactivity results taken from bioassays in literature, patents, and screening programs. chemical structures chemical structures are traditionally represented using lines indicating chemical bonds between atoms and drawn on paper ( 2d structural formulae ). while these are ideal visual representations for the chemist, they are unsuitable for computational use and especially for search and storage. small molecules ( also called ligands in drug design applications ), are usually represented using lists of atoms and their connections. large molecules such as proteins are however more compactly represented using the sequences of their amino acid building blocks. radioactive isotopes are also represented, which is an important attribute for some applications. large chemical databases for structures are expected to handle the storage and searching of information on millions of molecules taking terabytes of physical memory. literature database chemical literature databases correlate structures or other chemical information to relevant references such as academic papers or patents. this type of database includes stn, scifinder, and reaxys. links to literature are also included in many databases that focus on chemical characterization. crystallographic database crystallographic databases store x - ray crystal structure data. common examples include protein data bank and cambridge structural database. nmr spectra database nmr
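The atom - list - plus - connections representation for small molecules described above can be sketched minimally. The dict layout, index convention, and the simplified formula routine are illustrative assumptions, not any real database's schema.

```python
from collections import Counter

# A small molecule as a list of atoms plus a list of connections
# (pairs of atom indices), the machine-friendly form described above.
water = {
    "atoms": ["O", "H", "H"],   # element symbols by index
    "bonds": [(0, 1), (0, 2)],  # bonds as pairs of atom indices
}

def molecular_formula(mol):
    """Collapse the atom list into a formula string (simplified ordering)."""
    counts = Counter(mol["atoms"])
    return "".join(f"{el}{counts[el] if counts[el] > 1 else ''}"
                   for el in sorted(counts))

formula = molecular_formula(water)
```

Real systems use standardized encodings of this same idea (connection tables, line notations) so that millions of such records can be stored and substructure - searched.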
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
A cardinal makes brief contact with a picnic table, and between them there is
[ "death", "transactions", "animosity", "abrasion" ]
Key fact: friction occurs when two objects' surfaces move against each other
D
3
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as nosql databases. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, so this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ) system, rather than an oltp ( online transaction processing ) system. modern decision - support and classical statistical databases are often closer to the relational model than to the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse, with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in place and use compression techniques to squeeze them out, or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of the relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by
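The two minimum requirements listed in the passage, data presented as relations and relational operators to manipulate them, can be sketched with Python's sqlite3. The schema and values are invented for the example.

```python
import sqlite3

# Two relations (tables of rows and columns), combined by a relational
# operator (join) on values rather than by navigating stored pointers.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE paper  (id INTEGER PRIMARY KEY, title TEXT, author_id INTEGER);
    INSERT INTO author VALUES (1, 'codd');
    INSERT INTO paper  VALUES (10, 'a relational model of data', 1);
""")
rows = con.execute("""
    SELECT author.name, paper.title
    FROM author JOIN paper ON paper.author_id = author.id
""").fetchall()
```

The join matches rows purely by the values in `author_id` and `id`, which is the key contrast with earlier navigational systems.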
A family moves into an old home that mice have moved into. Soon after the human family moves in, the family of mice are likely to
[ "be frozen", "be ejected", "be happy", "be welcomed" ]
Key fact: humans moving into an environment usually causes native species to lose their habitats
B
1
openbookqa
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, object - relational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ), also called odbms ( object database management system ), combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop products, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
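The "same model of representation" point in the passage can be illustrated crudely: instead of decomposing an object graph into tables, the application's objects are stored as-is. Using pickle as the "store" here is a stand-in for a real ODBMS; the class and attribute names are invented.

```python
import pickle

# An object with references to other objects, as in CAD-style data.
# In an OODBMS the references stay references; no foreign-key mapping.
class Part:
    def __init__(self, name, subparts=None):
        self.name = name
        self.subparts = subparts or []  # object references, not key lookups

engine = Part("engine", [Part("piston"), Part("valve")])
blob = pickle.dumps(engine)       # store the object graph as-is
restored = pickle.loads(blob)     # same model in program and store
names = [p.name for p in restored.subparts]
```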
structured query language ( sql ) ( pronounced s - q - l ; or alternatively as " sequel " ) is a domain - specific language used to manage data, especially in a relational database management system ( rdbms ). it is particularly useful in handling structured data, i. e., data incorporating relations among entities and variables. introduced in the 1970s, sql offered two main advantages over older read - write apis such as isam or vsam. firstly, it introduced the concept of accessing many records with one single command. secondly, it eliminated the need to specify how to reach a record, i. e., with or without an index. originally based upon relational algebra and tuple relational calculus, sql consists of many types of statements, which may be informally classed as sublanguages, commonly : data query language ( dql ), data definition language ( ddl ), data control language ( dcl ), and data manipulation language ( dml ). the scope of sql includes data query, data manipulation ( insert, update, and delete ), data definition ( schema creation and modification ), and data access control. although sql is essentially a declarative language ( 4gl ), it also includes procedural elements. sql was one of the first commercial languages to use edgar f. codd's relational model. the model was described in his influential 1970 paper, " a relational model of data for large shared data banks ". despite not entirely ad
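The sublanguages listed above, and the "many records with one single command" advantage, can each be shown with one statement via sqlite3. The schema and values are invented for the example.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")                        # DDL
con.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])  # DML
# One declarative command touches every matching record; no cursor
# loop and no statement about whether an index is used.
con.execute("UPDATE t SET x = x * 10 WHERE x > 1")               # DML
result = con.execute("SELECT x FROM t ORDER BY x").fetchall()    # DQL
```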
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ) system, rather than an oltp ( online transaction processing ) system. modern decision - support and classical statistical databases are often closer to the relational model than to the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse, with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in place and use compression techniques to squeeze them out, or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
Maps may be redrawn because of
[ "an avalanche", "a deep freeze", "a glacier", "an earthquake" ]
Key fact: an earthquake changes Earth's surface quickly
D
3
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as nosql databases. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, so this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, object - relational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ), also called odbms ( object database management system ), combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop products, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
an array database management system or array dbms provides database services specifically for arrays ( also called raster data ), that is : homogeneous collections of data items ( often called pixels, voxels, etc. ), sitting on a regular grid of one, two, or more dimensions. often arrays are used to represent sensor, simulation, image, or statistics data. such arrays tend to be big data, with single objects frequently ranging into terabyte and soon petabyte sizes ; for example, today's earth and space observation archives typically grow by terabytes a day. array databases aim at offering flexible, scalable storage and retrieval on this information category. overview in the same style as standard database systems do on sets, array dbmss offer scalable, flexible storage and flexible retrieval / manipulation on arrays of ( conceptually ) unlimited size. as in practice arrays never appear standalone, such an array model normally is embedded into some overall data model, such as the relational model. some systems implement arrays as an analogy to tables, some introduce arrays as an additional attribute type. management of arrays requires novel techniques, particularly due to the fact that traditional database tuples and objects tend to fit well into a single database page a unit of disk access on a server, typically 4 kb while array objects easily can span several media. the prime task of the array storage manager is to give fast access to large arrays and sub - arrays. to this end, arrays get partitioned, during insertion, into
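The partitioning idea the passage ends on can be sketched in one dimension: the array is split into fixed-size tiles at insertion time, and a sub-array read touches only the tiles it overlaps. Tile size and data are illustrative; real systems tile in multiple dimensions.

```python
# Sketch of array tiling: store in fixed-size tiles, read sub-ranges
# by visiting only the overlapping tiles (never the whole array).
TILE = 4  # illustrative tile size; real pages are e.g. kilobytes

def to_tiles(data):
    """Partition on insertion: split the array into fixed-size tiles."""
    return {i // TILE: data[i:i + TILE] for i in range(0, len(data), TILE)}

def read_range(tiles, start, stop):
    """Read the sub-array [start, stop), touching only overlapping tiles."""
    out = []
    for t in range(start // TILE, (stop - 1) // TILE + 1):
        base = t * TILE
        out.extend(tiles[t][max(0, start - base):stop - base])
    return out

tiles = to_tiles(list(range(10)))
segment = read_range(tiles, 2, 7)
```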
Miletinae in the air above will have gone through a transmutation and would have already gone through what pre-imago but post-egg stage?
[ "egg stage", "old stage", "stage after larva", "moth stage" ]
Key fact: the pupa stage is a stage in the metamorphosis process of some animals
C
2
openbookqa
beebase was an online bioinformatics database that hosted data related to apis mellifera, the european honey bee along with some pathogens and other species. it was developed in collaboration with the honey bee genome sequencing consortium. in 2020 it was archived and replaced by the hymenoptera genome database. data and services biological data and services available on beebase included : dna and protein sequence data official bee gene set ( developed by and hosted at beebase ) genome browser linkage maps server to search the honey bee genome using blast services in feb 2007, beebase consisted of a gbrowser - based genome viewer and a cmap - based comparative map viewer, both modules of the generic model organism database ( gmod ) project. the genome viewer included tracks for known honey bee genes, predicted gene sets ( ensembl, ncbi, embl - heidelberg ), sts markers ( solignac and hunt linkage maps ), honey bee expressed sequence tags ( ests ), homologs in fruit fly, mosquito and other insects and transposable elements. the honey bee comparative map viewer displayed linkage maps and the physical map ( genome assembly ), highlighting markers that are common among maps. additionally, a qtl viewer and a gene expression database were planned. the genome sequence was to serve as a reference to link these diverse data types. beebase organized the community annotation of the bee genome in collaboration with baylor college of medicine human genome sequencing center.
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb includes also a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without missing the focus on the searched protein or gene. in phylomedb v4 all the information available for each tree
forensic entomology is the study of insects and their developmental stages to determine the time of death of a human body. the different stages of insect development used in forensic entomology are : 1. egg stage : insects, such as blowflies, lay their eggs on the decomposing body. the time it takes for the eggs to hatch can provide an estimation of the time of death. 2. larval stage : after hatching, the insects enter the larval stage, which consists of three instars ( sub - stages ). the size, weight, and development of the larvae can help estimate the time since death. each instar has a specific duration, and by identifying the instar, forensic entomologists can estimate the time elapsed since the eggs were laid. 3. pupal stage : after the larval stage, insects enter the pupal stage, during which they transform into adults. the color and development of the pupae can also be used to estimate the time of death. as the pupae mature, their color changes, and this can be used as an indicator of the time elapsed since the beginning of the pupal stage. 4. adult stage : the presence of adult insects on a body can also provide information about the time of death. the age of the adult insects, their mating status, and the presence of eggs can all be used to estimate the time since death. to determine the time of death using insect development, forensic en
Which of the following relies on vibrating matter to work?
[ "soda cans", "dog food", "baseball caps", "violas" ]
Key fact: vibrating matter can produce sound
D
3
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
In a bog, when water levels become low,
[ "bog animals may be thirsty", "bog animals may search longer for nutrients", "the bog produces more water", "bog predators need more prey" ]
Key fact: as available water in an environment decreases , the amount of available food in that environment will decrease
B
1
openbookqa
a bog or bogland is a wetland that accumulates peat as a deposit of dead plant materials often mosses, typically sphagnum moss. it is one of the four main types of wetlands. other names for bogs include mire, mosses, quagmire, and muskeg ; alkaline mires are called fens. a bayhead is another type of bog found in the forest of the gulf coast states in the united states. they are often covered in heath or heather shrubs rooted in the sphagnum moss and peat. the gradual accumulation of decayed plant material in a bog functions as a carbon sink. bogs occur where the water at the ground surface is acidic and low in nutrients. a bog usually is found at a freshwater soft spongy ground that is made up of decayed plant matter which is known as peat. they are generally found in cooler northern climates and are formed in poorly draining lake basins. in contrast to fens, they derive most of their water from precipitation rather than mineral - rich ground or surface water. water flowing out of bogs has a characteristic brown colour, which comes from dissolved peat tannins. in general, the low fertility and cool climate result in relatively slow plant growth, but decay is even slower due to low oxygen levels in saturated bog soils. hence, peat accumulates. large areas of the landscape can be covered many meters deep in peat. bogs have distinctive assemblages of animal
may not be able to use vision as their primary sense to find food. instead, they are more likely to use taste or chemical cues to find prey. wetlands wetlands are environments in which the soil is either permanently or periodically saturated with water. wetlands are different from lakes because wetlands are shallow bodies of water whereas lakes vary in depth. emergent vegetation consists of wetland plants that are rooted in the soil but have portions of leaves, stems, and flowers extending above the water ’ s surface. there are several types of wetlands including marshes, swamps, bogs, mudflats, and salt marshes ( figure 44. 25 ). the three shared characteristics among these types — what makes them wetlands — are their hydrology, hydrophytic vegetation, and hydric soils.
freshwater marshes and swamps are characterized by slow and steady water flow. bogs develop in depressions where water flow is low or nonexistent. bogs usually occur in areas where there is a clay bottom with poor percolation. percolation is the movement of water through the pores in the soil or rocks. the water found in a bog is stagnant and oxygen depleted because the oxygen that is used during the decomposition of organic matter is not replaced. as the oxygen in the water is depleted, decomposition slows. this leads to organic acids and other acids building up and lowering the ph of the water. at a lower ph, nitrogen becomes unavailable to plants. this creates a challenge for plants because nitrogen is an important limiting resource. some types of bog plants ( such as sundews, pitcher plants, and venus flytraps ) capture insects and extract the nitrogen from their bodies. bogs have low net primary productivity because the water found in bogs has low levels of nitrogen and oxygen.
In order for a crane to operate properly it requires
[ "sand", "the wind", "solar power", "a fulcrum" ]
Key fact: a lever is used for moving heavy objects
D
3
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
verde ( visualizing energy resources dynamically on the earth ) is a visualization and analysis capability of the united states department of energy ( doe ). the system, developed and maintained by oak ridge national laboratory ( ornl ), provides wide - area situational understanding of the u. s. electric grid. enabling grid monitoring, weather impacts prediction and analysis, verde supports preparedness and response to potentially large outage events. as a real - time geo - visualization capability, it characterizes the dynamic behavior of the grid over interconnects giving views into bulk transmission lines as well as county - level power distribution status. by correlating grid behaviors with cyber events, the platform also enables a method to link cyber - to - infrastructure dependencies. verde integrates different data elements from other available on - line services, databases, and social media. the tennessee valley authority ( tva ) and other major utilities spanning multiple regions across the electric grid interconnection provide real - time status of their systems. social media sources such as twitter provide additional data - sources for visualization and analyses. the verde software, which was developed by the computational sciences and engineering division ( csed ) of ornl, is used outside of the doe for a number of related national security requirements. references shankar, m., stovall, j., sorokine, a., bhaduri, b., king, t. ( 20 - 24 july 2008 ) power and energy society
tethys is an online knowledge management system that provides the marine renewable energy ( mre ) and wind energy communities with access to information and scientific literature on the environmental effects of devices. named after the greek titaness of the sea, the goal of the tethys database is to promote environmental stewardship and the advancement of the wind and marine renewable energy communities. the website has been developed by the pacific northwest national laboratory ( pnnl ) in support of the u. s. department of energy ( doe ) water power technologies office and wind energy technologies office. tethys hosts information and activities associated with two international collaborations known as oes - environmental and wren, formed to examine the environmental effects of marine renewable energy projects and wind energy projects, respectively. content overview as industry, academia, and government seek to develop new renewable energy sources from moving water and wind, there exists an opportunity to gather potential environmental effects of these technologies. tethys aims to evaluate and measure these effects to ensure that aquatic and avian animals, habitats, and ecosystem functions are not adversely affected, nor that important ocean and land uses are displaced. while these studies are presently scattered among different organizations, tethys creates a centralized hub where this information can be found. each document is labeled with an environmental stressor and receptor which categorize the type of potential harm and the affected area of the environment. the categories and the technology types covered are listed below : oes - environmental oes - environmental, formerly
If a tree falls then it is what?
[ "alive", "expired", "lush", "growing" ]
Key fact: if a tree falls then that tree is dead
B
1
openbookqa
exploitdb, sometimes stylized as exploit database or exploit - database, is a public and open source vulnerability database maintained by offensive security. it is one of the largest and most popular exploit databases in existence. while the database is publicly available via their website, the database can also be used by utilizing the searchsploit command - line tool which is native to kali linux. the database also contains proof - of - concepts ( pocs ), helping information security professionals learn new exploit variations. in ethical hacking and penetration testing guide, rafay baloch said exploit - db had over 20, 000 exploits, and was available in backtrack linux by default. in ceh v10 certified ethical hacker study guide, ric messier called exploit - db a " great resource ", and stated it was available within kali linux by default, or could be added to other linux distributions. the current maintainers of the database, offensive security, are not responsible for creating the database. the database was started in 2004 by a hacker group known as milw0rm and has changed hands several times. as of 2023, the database contained 45, 000 entries from more than 9, 000 unique authors. see also offensive security offensive security certified professional references external links official website
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb includes also a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without missing the focus on the searched protein or gene. in phylomedb v4 all the information available for each tree
metpetdb is a relational database and repository for global geochemical data on and images collected from metamorphic rocks from the earth's crust. metpetdb is designed and built by a global community of metamorphic petrologists in collaboration with computer scientists at rensselaer polytechnic institute as part of the national cyberinfrastructure initiative and supported by the national science foundation. metpetdb is unique in that it incorporates image data collected by a variety of techniques, e. g. photomicrographs, backscattered electron images ( sem ), and x - ray maps collected by wavelength dispersive spectroscopy or energy dispersive spectroscopy. purpose metpetdb was built for the purpose of archiving published data and for storing new data for ready access to researchers and students in the petrologic community. this database facilitates the gathering of information for researchers beginning new projects and permits browsing and searching for data relating to anywhere on the globe. metpetdb provides a platform for collaborative studies among researchers anywhere on the planet, serves as a portal for students beginning their studies of metamorphic geology, and acts as a repository of vast quantities of data being collected by researchers globally. design the basic structure of metpetdb is based on a geologic sample and derivative subsamples. geochemical data are linked to subsamples and the minerals within them, while image data can relate to samples or subsamples. metpetdb is designed to store the distinct spatial / textural context
what characterizes a cycle
[ "a steady recurrence", "none of these", "a stagnant pattern", "a circle shape" ]
Key fact: a cycle happens repeatedly
A
0
openbookqa
database theory encapsulates a broad range of topics related to the study and research of the theoretical realm of databases and database management systems. theoretical aspects of data management include, among other areas, the foundations of query languages, computational complexity and expressive power of queries, finite model theory, database design theory, dependency theory, foundations of concurrency control and database recovery, deductive databases, temporal and spatial databases, real - time databases, managing uncertain data and probabilistic databases, and web data. most research work has traditionally been based on the relational model, since this model is usually considered the simplest and most foundational model of interest. corresponding results for other data models, such as object - oriented or semi - structured models, or, more recently, graph data models and xml, are often derivable from those for the relational model. database theory helps one to understand the complexity and power of query languages and their connection to logic. starting from relational algebra and first - order logic ( which are equivalent by codd's theorem ) and the insight that important queries such as graph reachability are not expressible in this language, more powerful language based on logic programming and fixpoint logic such as datalog were studied. the theory also explores foundations of query optimization and data integration. here most work studied conjunctive queries, which admit query optimization even under constraints using the chase algorithm. the main research conferences in the area are the acm symposium on principles of database systems ( pods
in descriptive statistics and chaos theory, a recurrence plot ( rp ) is a plot showing, for each moment j in time, the times at which the state of a dynamical system returns to the previous state at i, i. e., when the phase space trajectory visits roughly the same area in the phase space as at time j. in other words, it is a plot of x ( i ) ≈ x ( j ), showing i on a horizontal axis and j on a vertical axis, where x is the state of the system ( or its phase space trajectory ). background natural processes can have a distinct recurrent behaviour, e. g. periodicities ( as seasonal or milankovitch cycles ), but also irregular cyclicities ( as el niño southern oscillation, heart beat intervals ). moreover, the recurrence of states, in the meaning that states are again arbitrarily close after some time of divergence, is a fundamental property of deterministic dynamical systems and is typical for nonlinear or chaotic systems ( cf. poincaré recurrence theorem ). the recurrence
by identifying cyclical patterns.
A crowd-source worker wants to track when a certain task drops. They would
[ "use a notebook", "randomly play tracks", "buy expensive sneakers", "run for miles" ]
Key fact: An example of collecting data is measuring
A
0
openbookqa
it tracks styles, genres, and subgenres, along with the tone of the music and the platforms on which the music is sold. it then connects that data together, in a way that can intelligently tell you about an entire type of music, whether a massive genre like classical, or a tiny one like sadcore.
a vector database, vector store or vector search engine is a database that uses the vector space model to store vectors ( fixed - length lists of numbers ) along with other data items. vector databases typically implement one or more approximate nearest neighbor algorithms, so that one can search the database with a query vector to retrieve the closest matching database records. vectors are mathematical representations of data in a high - dimensional space. in this space, each dimension corresponds to a feature of the data, with the number of dimensions ranging from a few hundred to tens of thousands, depending on the complexity of the data being represented. a vector's position in this space represents its characteristics. words, phrases, or entire documents, as well as images, audio, and other types of data, can all be vectorized. these feature vectors may be computed from the raw data using machine learning methods such as feature extraction algorithms, word embeddings or deep learning networks. the goal is that semantically similar data items receive feature vectors close to each other. vector databases can be used for similarity search, semantic search, multi - modal search, recommendations engines, large language models ( llms ), object detection, etc. vector databases are also often used to implement retrieval - augmented generation ( rag ), a method to improve domain - specific responses of large language models. the retrieval component of a rag can be any search system, but is most often implemented as a vector database. text documents describing the domain of interest are collected,
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
the gravitational pull between two objects increases as they are
[ "spun around", "brought together", "exposed to light", "moved apart" ]
Key fact: as distance from an object decreases , the the pull of gravity on that object increases
B
1
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
a crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. they are characterized by symmetry, morphology, and directionally dependent physical properties. a crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. ( molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in x - ray, neutron, and electron diffraction based crystallography ). crystal structures of crystalline material are typically determined from x - ray or neutron single - crystal diffraction data and stored in crystal structure databases. they are routinely identified by comparing reflection intensities and lattice spacings from x - ray powder diffraction data with entries in powder - diffraction fingerprinting databases. crystal structures of nanometer sized crystalline samples can be determined via structure factor amplitude information from single - crystal electron diffraction data or structure factor amplitude and phase angle information from fourier transforms of hrtem images of crystallites. they are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice - fringe fingerprint plots with entries in a lattice - fringe fingerprinting database. crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. many provide structure visualization capabilities. they can be browser based or
the problem of database repair is a question about relational databases which has been studied in database theory, and which is a particular kind of data cleansing. the problem asks about how we can " repair " an input relational database in order to make it satisfy integrity constraints. the goal of the problem is to be able to work with data that is " dirty ", i. e., does not satisfy the right integrity constraints, by reasoning about all possible repairs of the data, i. e., all possible ways to change the data to make it satisfy the integrity constraints, without committing to a specific choice. several variations of the problem exist, depending on : what we intend to figure out about the dirty data : figuring out if some database tuple is certain ( i. e., is in every repaired database ), figuring out if some query answer is certain ( i. e., the answer is returned when evaluating the query on every repaired database ) which kinds of ways are allowed to repair the database : can we insert new facts, remove facts ( so - called subset repairs ), and so on which repaired databases do we study : those where we only change a minimal subset of the database tuples ( e. g., minimal subset repairs ), those where we only change a minimal number of database tuples ( e. g., minimal cardinality repairs ) the problem of database repair has been studied to understand what is the complexity of these different problem variants, i. e.,
Which substance is likely present for the birth of a mountain?
[ "venom", "peat", "magma", "sunlight" ]
Key fact: mountains are formed by volcanoes
C
2
openbookqa
metpetdb is a relational database and repository for global geochemical data on and images collected from metamorphic rocks from the earth's crust. metpetdb is designed and built by a global community of metamorphic petrologists in collaboration with computer scientists at rensselaer polytechnic institute as part of the national cyberinfrastructure initiative and supported by the national science foundation. metpetdb is unique in that it incorporates image data collected by a variety of techniques, e. g. photomicrographs, backscattered electron images ( sem ), and x - ray maps collected by wavelength dispersive spectroscopy or energy dispersive spectroscopy. purpose metpetdb was built for the purpose of archiving published data and for storing new data for ready access to researchers and students in the petrologic community. this database facilitates the gathering of information for researchers beginning new projects and permits browsing and searching for data relating to anywhere on the globe. metpetdb provides a platform for collaborative studies among researchers anywhere on the planet, serves as a portal for students beginning their studies of metamorphic geology, and acts as a repository of vast quantities of data being collected by researchers globally. design the basic structure of metpetdb is based on a geologic sample and derivative subsamples. geochemical data are linked to subsamples and the minerals within them, while image data can relate to samples or subsamples. metpetdb is designed to store the distinct spatial / textural context
in nuclear data evaluation and validation, a library and a database serve different purposes, but both are essential for accurate predictions in theoretical nuclear reactor models. a nuclear data library is a collection of evaluated nuclear data files that contain information about various nuclear reactions, decay processes, and other relevant properties of atomic nuclei. these libraries are created through a rigorous evaluation process that combines experimental data, theoretical models, and statistical methods to provide the best possible representation of nuclear properties. some well - known nuclear data libraries include endf ( evaluated nuclear data file ), jeff ( joint evaluated fission and fusion ), and jendl ( japanese evaluated nuclear data library ). on the other hand, a nuclear database is a structured and organized collection of raw experimental and theoretical data related to nuclear reactions and properties. these databases store information from various sources, such as experimental measurements, theoretical calculations, and simulations. they serve as a primary source of information for nuclear data evaluators when creating nuclear data libraries. examples of nuclear databases include exfor ( experimental nuclear reaction data ), cinda ( computer index of nuclear data ), and ensdf ( evaluated nuclear structure data file ). the choice between a library and a database affects the accuracy of nuclear data predictions in a theoretical nuclear reactor model in several ways : 1. quality of data : nuclear data libraries contain evaluated data, which means they have undergone a thorough evaluation process to ensure their accuracy and reliability. in contrast, databases contain raw data that may not have been evaluated or validated.
a crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. they are characterized by symmetry, morphology, and directionally dependent physical properties. a crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. ( molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in x - ray, neutron, and electron diffraction based crystallography ). crystal structures of crystalline material are typically determined from x - ray or neutron single - crystal diffraction data and stored in crystal structure databases. they are routinely identified by comparing reflection intensities and lattice spacings from x - ray powder diffraction data with entries in powder - diffraction fingerprinting databases. crystal structures of nanometer sized crystalline samples can be determined via structure factor amplitude information from single - crystal electron diffraction data or structure factor amplitude and phase angle information from fourier transforms of hrtem images of crystallites. they are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice - fringe fingerprint plots with entries in a lattice - fringe fingerprinting database. crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. many provide structure visualization capabilities. they can be browser based or
Crops need to be switched on a continual basis. Why?
[ "the animal habitats", "for more pests", "bees and birds", "more wanted vitamins" ]
Key fact: farming causes nutrients in the soil to decrease
D
3
openbookqa
beebase was an online bioinformatics database that hosted data related to apis mellifera, the european honey bee along with some pathogens and other species. it was developed in collaboration with the honey bee genome sequencing consortium. in 2020 it was archived and replaced by the hymenoptera genome database. data and services biological data and services available on beebase included : dna and protein sequence data official bee gene set ( developed by and hosted at beebase ) genome browser linkage maps server to search the honey bee genome using blast services in feb 2007, beebase consisted of a gbrowser - based genome viewer and a cmap - based comparative map viewer, both modules of the generic model organism database ( gmod ) project. the genome viewer included tracks for known honey bee genes, predicted gene sets ( ensembl, ncbi, embl - heidelberg ), sts markers ( solignac and hunt linkage maps ), honey bee expressed sequence tags ( ests ), homologs in fruit fly, mosquito and other insects and transposable elements. the honey bee comparative map viewer displayed linkage maps and the physical map ( genome assembly ), highlighting markers that are common among maps. additionally, a qtl viewer and a gene expression database were planned. the genome sequence was to serve as a reference to link these diverse data types. beebase organized the community annotation of the bee genome in collaboration with baylor college of medicine human genome sequencing center.
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb includes also a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without missing the focus on the searched protein or gene. in phylomedb v4 all the information available for each tree
phenomicdb is a free phenotype oriented database. it contains data for some of the main model organisms such as homo sapiens, mus musculus, drosophila melanogaster, and others. phenomicdb merges and structures phenotypic data from various public sources : wormbase, flybase, ncbi gene, mgi and zfin using clustering algorithms. the website is now offline. references further reading groth p, kalev i, kirov i, traikov b, leser u, weiss b ( august 2010 ). " phenoclustering : online mining of cross - species phenotypes ". bioinformatics. 26 ( 15 ) : 19245. doi : 10. 1093 / bioinformatics / btq311. pmc 2905556. pmid 20562418. groth p, pavlova n, kalev i, tonov s, georgiev g, pohlenz hd, weiss b ( january 2007 ). " phenomicdb : a new cross - species genotype / phenotype resource ". nucleic acids research. 35 ( database issue ) : d6969. doi : 10. 1093 / nar / gkl662. pmc 1781118. pmid 16982638. kahraman a, avramov a, nashev lg, popov d,
Living things all require energy for what?
[ "dying", "observing", "staying perky", "decaying" ]
Key fact: living things all require energy for survival
C
2
openbookqa
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
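The chunk above notes that intelligent users can combine legal aggregate queries to derive information about a single individual. A minimal sketch of that classic "tracker" attack, using a hypothetical table and names (not any real system's API):

```python
# Sketch of why aggregate-only access can still leak individual values.
# The table and names below are hypothetical illustration data.
rows = [
    {"name": "alice", "dept": "geology", "salary": 70000},
    {"name": "bob",   "dept": "geology", "salary": 80000},
    {"name": "carol", "dept": "biology", "salary": 90000},
]

def agg_sum(predicate, field="salary"):
    """Aggregate-only interface: callers never see individual rows."""
    return sum(r[field] for r in rows if predicate(r))

# Two perfectly legal aggregate queries...
total_geology = agg_sum(lambda r: r["dept"] == "geology")
geology_without_bob = agg_sum(lambda r: r["dept"] == "geology" and r["name"] != "bob")

# ...whose difference isolates one person's value.
bobs_salary = total_geology - geology_without_bob
print(bobs_salary)  # 80000
```

Defenses mentioned in the literature (query-set-size limits, noise addition) all aim at breaking exactly this kind of differencing.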
the problem of database repair is a question about relational databases which has been studied in database theory, and which is a particular kind of data cleansing. the problem asks about how we can " repair " an input relational database in order to make it satisfy integrity constraints. the goal of the problem is to be able to work with data that is " dirty ", i. e., does not satisfy the right integrity constraints, by reasoning about all possible repairs of the data, i. e., all possible ways to change the data to make it satisfy the integrity constraints, without committing to a specific choice. several variations of the problem exist, depending on : what we intend to figure out about the dirty data : figuring out if some database tuple is certain ( i. e., is in every repaired database ), figuring out if some query answer is certain ( i. e., the answer is returned when evaluating the query on every repaired database ) which kinds of ways are allowed to repair the database : can we insert new facts, remove facts ( so - called subset repairs ), and so on which repaired databases do we study : those where we only change a minimal subset of the database tuples ( e. g., minimal subset repairs ), those where we only change a minimal number of database tuples ( e. g., minimal cardinality repairs ) the problem of database repair has been studied to understand what is the complexity of these different problem variants, i. e.,
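The subset-repair and certain-tuple notions above can be made concrete on a toy key constraint. A minimal sketch (illustrative data, not a general repair algorithm): each minimal subset repair keeps exactly one tuple from every group of key-conflicting tuples, and a tuple is certain iff it survives in every repair.

```python
from itertools import product

# Toy relation emp(name, dept) where "name" is the key; the dirty data
# contains two conflicting tuples for bob.
dirty = [("alice", "geology"), ("bob", "geology"), ("bob", "biology")]

# Group conflicting tuples by key value.
groups = {}
for t in dirty:
    groups.setdefault(t[0], []).append(t)

# A minimal subset repair keeps exactly one tuple from each key group.
repairs = [set(choice) for choice in product(*groups.values())]

# A tuple is *certain* if it appears in every repair.
certain = set.intersection(*repairs)
print(certain)  # {('alice', 'geology')}
```

Here there are two repairs (one per choice for bob), and only alice's tuple is certain, which is exactly the "reason about all repairs without committing to one" idea.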
database theory encapsulates a broad range of topics related to the study and research of the theoretical realm of databases and database management systems. theoretical aspects of data management include, among other areas, the foundations of query languages, computational complexity and expressive power of queries, finite model theory, database design theory, dependency theory, foundations of concurrency control and database recovery, deductive databases, temporal and spatial databases, real - time databases, managing uncertain data and probabilistic databases, and web data. most research work has traditionally been based on the relational model, since this model is usually considered the simplest and most foundational model of interest. corresponding results for other data models, such as object - oriented or semi - structured models, or, more recently, graph data models and xml, are often derivable from those for the relational model. database theory helps one to understand the complexity and power of query languages and their connection to logic. starting from relational algebra and first - order logic ( which are equivalent by codd's theorem ) and the insight that important queries such as graph reachability are not expressible in this language, more powerful language based on logic programming and fixpoint logic such as datalog were studied. the theory also explores foundations of query optimization and data integration. here most work studied conjunctive queries, which admit query optimization even under constraints using the chase algorithm. the main research conferences in the area are the acm symposium on principles of database systems ( pods
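The chunk above cites graph reachability as a query not expressible in first-order logic but expressible in Datalog. A naive bottom-up fixpoint evaluation of the two-rule Datalog program for reachability can be sketched as follows (the edge facts are hypothetical sample data):

```python
# Naive bottom-up evaluation of the Datalog program
#   reach(x, y) :- edge(x, y).
#   reach(x, z) :- reach(x, y), edge(y, z).
edge = {("a", "b"), ("b", "c"), ("c", "d")}

reach = set(edge)          # apply the first rule once
changed = True
while changed:             # apply the second rule until a fixpoint
    new = {(x, z) for (x, y1) in reach for (y2, z) in edge if y1 == y2}
    changed = not new <= reach
    reach |= new

print(("a", "d") in reach)  # True
```

No fixed number of first-order joins computes this for paths of unbounded length, which is why fixpoint languages such as Datalog were studied.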
Plants can use what as a pollinator?
[ "bees wax", "small extinct birds", "rhinos", "windy air" ]
Key fact: pollination is when wind carries pollen from one flower to another flower
D
3
openbookqa
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
beebase was an online bioinformatics database that hosted data related to apis mellifera, the european honey bee along with some pathogens and other species. it was developed in collaboration with the honey bee genome sequencing consortium. in 2020 it was archived and replaced by the hymenoptera genome database. data and services biological data and services available on beebase included : dna and protein sequence data official bee gene set ( developed by and hosted at beebase ) genome browser linkage maps server to search the honey bee genome using blast services in feb 2007, beebase consisted of a gbrowser - based genome viewer and a cmap - based comparative map viewer, both modules of the generic model organism database ( gmod ) project. the genome viewer included tracks for known honey bee genes, predicted gene sets ( ensembl, ncbi, embl - heidelberg ), sts markers ( solignac and hunt linkage maps ), honey bee expressed sequence tags ( ests ), homologs in fruit fly, mosquito and other insects and transposable elements. the honey bee comparative map viewer displayed linkage maps and the physical map ( genome assembly ), highlighting markers that are common among maps. additionally, a qtl viewer and a gene expression database were planned. the genome sequence was to serve as a reference to link these diverse data types. beebase organized the community annotation of the bee genome in collaboration with baylor college of medicine human genome sequencing center.
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
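The graph model described above (nodes, labelled directed edges carrying properties, relationships followed directly rather than joined) can be sketched as a toy in-memory store. This is an illustrative class of my own, not any real graph database's API:

```python
# Minimal property-graph sketch: labelled, directed edges with
# properties, traversed in one operation (no foreign-key join).
class TinyGraph:
    def __init__(self):
        self.nodes = {}    # node id -> properties
        self.edges = []    # (src, label, dst, properties)

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props

    def add_edge(self, src, label, dst, **props):
        self.edges.append((src, label, dst, props))

    def neighbors(self, src, label):
        """Follow matching edges directly from a node."""
        return [dst for (s, l, dst, _) in self.edges
                if s == src and l == label]

g = TinyGraph()
g.add_node("alice", kind="person")
g.add_node("bob", kind="person")
g.add_edge("alice", "knows", "bob", since=2019)

print(g.neighbors("alice", "knows"))  # ['bob']
```

Real systems differ mainly in how `edges` is persisted (native adjacency, a relational table, or a key-value store), which is the storage-mechanism variation the chunk describes.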
What prohibits cells from contorting into deformed shapes?
[ "helpful viruses help them retain their function", "a prison cell wall contains them", "the thin membrane which surrounds them", "they have a Chilton's manual to guide them" ]
Key fact: the cell membrane provides support for a cell
C
2
openbookqa
prokaryotic cells have a cell wall outside their plasma membrane.
the cell membrane ( also known as the plasma membrane or cytoplasmic membrane, and historically referred to as the plasmalemma ) is a biological membrane that separates and protects the interior of a cell from the outside environment ( the extracellular space ). the cell membrane consists of a lipid bilayer, made up of two layers of phospholipids with cholesterols ( a lipid component ) interspersed between them, maintaining appropriate membrane fluidity at various temperatures. the membrane also contains membrane proteins, including integral proteins that span the membrane and serve as membrane transporters, and peripheral proteins that loosely attach to the outer ( peripheral ) side of the cell membrane, acting as enzymes to facilitate interaction with the cell's environment. glycolipids embedded in the outer lipid layer serve a similar purpose. the cell membrane controls the movement of substances in and out of a cell, being selectively permeable to ions and organic molecules. in addition, cell membranes are involved in a variety of cellular processes such as cell adhesion, ion conductivity, and cell signalling and serve as the attachment surface for several extracellular structures, including the cell wall and the carbohydrate layer called the glycocalyx, as well as the intracellular network of protein fibers called the cytoskeleton. in the field of synthetic biology, cell membranes can be artificially reassembled. history robert hooke's discovery of cells in 1665 led to
cell physiology is the biological study of the activities that take place in a cell to keep it alive. the term physiology refers to normal functions in a living organism. animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure. general characteristics there are two types of cells : prokaryotes and eukaryotes. prokaryotes were the first of the two to develop and do not have a self - contained nucleus. their mechanisms are simpler than later - evolved eukaryotes, which contain a nucleus that envelops the cell's dna and some organelles. prokaryotes prokaryotes have dna located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. there are two domains of prokaryotes : bacteria and archaea. prokaryotes have fewer organelles than eukaryotes. both have plasma membranes and ribosomes ( structures that synthesize proteins and float free in cytoplasm ). two unique characteristics of prokaryotes are fimbriae ( finger - like projections on the surface of a cell ) and flagella ( threadlike structures that aid movement ). eukaryotes eukaryotes have a nucleus where dna is contained. they are usually larger than prokaryotes and contain many more organelles. the nucleus, the feature of a eukaryote that distinguishes it from a
Robins will often devour
[ "rocks", "wood", "glue", "grasshoppers" ]
Key fact: birds sometimes eat insects
D
3
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb includes also a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without missing the focus on the searched protein or gene. in phylomedb v4 all the information available for each tree
My dad gave me
[ "my head windows", "a building", "air", "the sun" ]
Key fact: offspring receive genes from their parents through DNA
A
0
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
then the bulk of effort concentrates on writing the proper mediator code that will transform predicates on weather into a query over the weather website. this effort can become complex if some other source also relates to weather, because the designer may need to write code to properly combine the results from the two sources. on the other hand, in lav, the source database is modeled as a set of views over g { \ displaystyle g }.
What might be harder to digest?
[ "corn", "spinach", "water", "eggs" ]
Key fact: the breaking down of food into simple substances occurs in the digestive system
A
0
openbookqa
the eggnog database is a database of biological information hosted by the embl. it is based on the original idea of cogs ( clusters of orthologous groups ) and expands that idea to non - supervised orthologous groups constructed from numerous organisms. the database was created in 2007 and updated to version 4. 5 in 2015. eggnog stands for evolutionary genealogy of genes : non - supervised orthologous groups. references external links http : / / eggnogdb. embl. de
the plant proteome database is a national science foundation - funded project to determine the biological function of each protein in plants. it includes data for two plants that are widely studied in molecular biology, arabidopsis thaliana and maize ( zea mays ). initially the project was limited to plant plastids, under the name of the plastid pdb, but was expanded and renamed plant pdb in november 2007. see also proteome references external links plant proteome database home page
the plant dna c - values database ( https : / / cvalues. science. kew. org / ) is a comprehensive catalogue of c - value ( nuclear dna content, or in diploids, genome size ) data for land plants and algae. the database was created by prof. michael d. bennett and dr. ilia j. leitch of the royal botanic gardens, kew, uk. the database was originally launched as the " angiosperm dna c - values database " in april 1997, essentially as an online version of collected data lists that had been published by prof. bennett and colleagues since the 1970s. release 1. 0 of the more inclusive plant dna c - values database was launched in 2001, with subsequent releases 2. 0 in january 2003 and 3. 0 in december 2004. in addition to the angiosperm dataset made available in 1997, the database has been expanded taxonomically several times and now includes data from pteridophytes ( since 2000 ), gymnosperms ( since 2001 ), bryophytes ( since 2001 ), and algae ( since 2004 ) ( see ( 1 ) for update history ). ( note that each of these subset databases is cited individually as they may contain different sets of authors ). the most recent release of the database ( release 7. 1 ) went live in april 2019. it contains data for 12, 273 species of plants comprising 10, 770 angiosperms, 421 gymnos