| question (string) | options (list) | rationale (string) | label (string) | label_idx (int64) | dataset (string) | chunk1 (string) | chunk2 (string) | chunk3 (string) |
|---|---|---|---|---|---|---|---|---|
| With the addition of thrusters your forward momentum will | ["stop", "increase", "decrease", "stall"] | Key fact: a force continually acting on an object in the same direction that the object is moving can cause that object 's speed to increase in a forward motion | B | 1 | openbookqa | a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query | a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are : | an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python |
|
| Nocturnal predators hunt when? | ["sleep time", "midday", "morning", "noon"] | Key fact: nocturnal predators hunt during the night | A | 0 | openbookqa | a temporal database stores data relating to time instances. it offers temporal data types and stores information relating to past, present and future time. temporal databases can be uni - temporal, bi - temporal or tri - temporal. more specifically the temporal aspects usually include valid time, transaction time and / or decision time. valid time is the time period during or event time at which a fact is true in the real world. transaction time is the time at which a fact was recorded in the database. decision time is the time at which the decision was made about the fact. used to keep a history of decisions about valid times. types uni - temporal a uni - temporal database has one axis of time, either the validity range or the system time range. bi - temporal a bi - temporal database has two axes of time : valid time transaction time or decision time tri - temporal a tri - temporal database has three axes of time : valid time transaction time decision time this approach introduces additional complexities. temporal databases are in contrast to current databases ( not to be confused with currently available databases ), which store only facts which are believed to be true at the current time. features temporal databases support managing and accessing temporal data by providing one or more of the following features : a time period datatype, including the ability to represent time periods with no end ( infinity or forever ) the ability to define valid and transaction time period attributes and bitemporal relations system - maintained transaction time temporal primary keys, including | a chronotype is the behavioral manifestation of an underlying circadian rhythm's myriad of physical processes. a person's chronotype is the propensity for the individual to sleep at a particular time during a 24 - hour period. eveningness ( delayed sleep period ; most active and alert in the evening ) and morningness ( advanced sleep period ; most active and alert in the morning ) are the two extremes with most individuals having some flexibility in the timing of their sleep period. however, across development there are changes in the propensity of the sleep period with pre - pubescent children preferring an advanced sleep period, adolescents preferring a delayed sleep period and many elderly preferring an advanced sleep period. humans are normally diurnal creatures that are active in the daytime. as with most other diurnal animals, human activity - rest patterns are endogenously regulated by biological clocks with a circadian ( ~ 24 - hour ) period. chronotypes have also been investigated in other species, such as fruit flies and mice. history physiology professor nathaniel kleitman's 1939 book sleep and wakefulness, revised 1963, summarized the existing knowledge of sleep and proposed the existence of a basic rest - activity cycle. kleitman, with his students including william c. dement and eugene aserinsky, continued his research throughout the 1900s. o. quist's 1970 thesis at the department of psychology, university of gteborg, sweden, marks the beginning of modern research into | a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are : |
|
| What can genes do? | ["Give a young goat hair that looks like its mother's hair", "Make a baby chubby", "Make a horse break its leg", "Attack viruses and bacteria"] | Key fact: genes are a vehicle for passing inherited characteristics from parent to offspring | A | 0 | openbookqa | genedb was a genome database for eukaryotic and prokaryotic pathogens. references external links http : / / www. genedb. org | treefam ( tree families database ) is a database of phylogenetic trees of animal genes. it aims at developing a curated resource that gives reliable information about ortholog and paralog assignments, and evolutionary history of various gene families. treefam defines a gene family as a group of genes that evolved after the speciation of single - metazoan animals. it also tries to include outgroup genes like yeast ( s. cerevisiae and s. pombe ) and plant ( a. thaliana ) to reveal these distant members. treefam is also an ortholog database. unlike other pairwise alignment based ones, treefam infers orthologs by means of gene trees. it fits a gene tree into the universal species tree and finds historical duplications, speciations and losses events. treefam uses this information to evaluate tree building, guide manual curation, and infer complex ortholog and paralog relations. the basic elements of treefam are gene families that can be divided into two parts : treefam - a and treefam - b families. treefam - b families are automatically created. they might contain errors given complex phylogenies. treefam - a families are manually curated from treefam - b ones. family names and node names are assigned at the same time. the ultimate goal of treefam is to present a curated resource for all the families. treefa | phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb includes also a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without missing the focus on the searched protein or gene. in phylomedb v4 all the information available for each tree |
|
| When both a dominant and recessive gene are present, the dominate what will be visible? | ["society", "feature", "person", "path"] | Key fact: when both a dominant and recessive gene are present , the dominant trait will be visible | B | 1 | openbookqa | a user profile in machine learning and data science is generally built to understand and predict user behaviors and preferences. it usually includes demographic information ( e. g., age, gender, location ) and historical interaction data ( e. g., past purchases, clicked items, browsing history ) to personalize experiences or to make recommendations. some systems also augment user profiles with external social media data, although this is not always the case. | until the 1980s, databases were viewed as computer systems that stored record - oriented and business data such as manufacturing inventories, bank records, and sales transactions. a database system was not expected to merge numeric data with text, images, or multimedia information, nor was it expected to automatically notice patterns in the data it stored. in the late 1980s the concept of an intelligent database was put forward as a system that manages information ( rather than data ) in a way that appears natural to users and which goes beyond simple record keeping. the term was introduced in 1989 by the book intelligent databases by kamran parsaye, mark chignell, setrag khoshafian and harry wong. the concept postulated three levels of intelligence for such systems : high level tools, the user interface and the database engine. the high level tools manage data quality and automatically discover relevant patterns in the data with a process called data mining. this layer often relies on the use of artificial intelligence techniques. the user interface uses hypermedia in a form that uniformly manages text, images and numeric data. the intelligent database engine supports the other two layers, often merging relational database techniques with object orientation. in the twenty - first century, intelligent databases have now become widespread, e. g. hospital databases can now call up patient histories consisting of charts, text and x - ray images just with a few mouse clicks, and many corporate databases include decision support tools based on sales pattern analysis. external links intelligent databases, book | a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by |
|
| What could be used as a conductor? | ["a cat", "A penny", "a cloud", "wood"] | Key fact: sending electricity through a conductor causes electric current to flow through that conductor | B | 1 | openbookqa | a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by | a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query | an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python |
|
| sunlight is a heat source emitted from | ["a white dwarf star", "our only yellow star", "a nearby quasar star", "a red giant star"] | Key fact: the sun is a source of heat called sunlight | B | 1 | openbookqa | we have a main sequence star nearby. our sun is on the main sequence, classified as a yellow dwarf. our sun has been a main sequence star for about 5 billion years. as a medium - sized star, it will continue to shine for about 5 billion more years. most stars are on the main sequence. | a red giant is a luminous giant star of low or intermediate mass ( roughly 0. 38 solar masses ( m ) ) in a late phase of stellar evolution. the outer atmosphere is inflated and tenuous, making the radius large and the surface temperature around 5, 000 k [ k ] ( 4, 700 c ; 8, 500 f ) or lower. the appearance of the red giant is from yellow - white to reddish - orange, including the spectral types k and m, sometimes g, but also class s stars and most carbon stars. red giants vary in the way by which they generate energy : most common red giants are stars on the red - giant branch ( rgb ) that are still fusing hydrogen into helium in a shell surrounding an inert helium core red - clump stars in the cool half of the horizontal branch, fusing helium into carbon in their cores via the triple - alpha process asymptotic - giant - branch ( agb ) stars with a helium burning shell outside a degenerate carbonoxygen core, and a hydrogen - burning shell just beyond that. many of the well - known bright stars are red giants because they are luminous and moderately common. the k0 rgb star arcturus is 36 light - years away, and gacrux is the nearest m - class giant at 88 light - years'distance. a red giant will usually produce a planetary nebula and become a white dwarf at the end of its life. characteristics a red giant is | stars are classified by color and temperature. the most common system uses the letters o ( blue ), b ( blue - white ), a ( white ), f ( yellow - white ), g ( yellow ), k ( orange ), and m ( red ), from hottest to coolest. |
|
| A waste product of human respiration | ["is a vital resource to pigs", "is a vital resource to daffodils", "is a vital resource to oceans", "is a vital resource to bees"] | Key fact: In the respiration process carbon dioxide is a waste product | B | 1 | openbookqa | in biology and ecology, a resource is a substance or object in the environment required by an organism for normal growth, maintenance, and reproduction. resources can be consumed by one organism and, as a result, become unavailable to another organism. for plants key resources are light, nutrients, water, and space to grow. for animals key resources are food, water, and territory. key resources for plants terrestrial plants require particular resources for photosynthesis and to complete their life cycle of germination, growth, reproduction, and dispersal : carbon dioxide microsite ( ecology ) nutrients pollination seed dispersal soil water key resources for animals animals require particular resources for metabolism and to complete their life cycle of gestation, birth, growth, and reproduction : foraging territory water resources and ecological processes resource availability plays a central role in ecological processes : carrying capacity biological competition liebig's law of the minimum niche differentiation see also abiotic component biotic component community ecology ecology population ecology plant ecology size - asymmetric competition = = references = = | the miriam registry, a by - product of the miriam guidelines, is a database of namespaces and associated information that is used in the creation of uniform resource identifiers. it contains the set of community - approved namespaces for databases and resources serving, primarily, the biological sciences domain. these shared namespaces, when combined with'data collection'identifiers, can be used to create globally unique identifiers for knowledge held in data repositories. for more information on the use of uris to annotate models, see the specification of sbml level 2 version 2 ( and above ). a'data collection'is defined as a set of data which is generated by a provider. a'resource'is defined as a distributor of that data. such a description allows numerous resources to be associated with a single collection, allowing accurate representation of how biological information is available on the world wide web ; often the same information, from a single data collection, may be mirrored by different resources, or the core information may be supplemented with other data. data collection name : gene ontology data collection identifier : mir : 00000022 data collection synonyms : go data collection identifier pattern : ^ go : \ d { 7 } $ data collection namespace : urn : miriam : obo. go data collection'root url': http : / / identifiers. org / obo. go / data collection'root ur | a natural resource is anything in nature that humans need. metals and fossil fuels are natural resources. but so are water, sunlight, soil, and wind. even living things are natural resources. |
|
| Which of the following is most likely to make a person shiver? | ["being in a gym", "being in a sauna", "being in a fridge", "being in a pool"] | Key fact: cool temperatures cause animals to shiver | C | 2 | openbookqa | a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query | an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python | your energy bar is an example. some of the chemical energy stored in the bar is absorbed into molecules your body uses. |
|
if a tunnel had a modern facility for seeing, what can we infer from this?
|
[
"there is water in use",
"Thomas Edison's work is in use",
"there is sunlight in use",
"there is petrol in use"
] |
Key fact:
a light bulb requires electrical energy to produce light
|
B
| 1
|
openbookqa
|
over 90 % of the energy we use comes originally from the sun. every day, the sun provides the earth with almost 10, 000 times the amount of energy necessary to meet all of the world ’ s energy needs for that day. our challenge is to find ways to convert and store incoming solar energy so that it can be used in reactions or chemical processes that are both convenient and nonpolluting. plants and many bacteria capture solar energy through photosynthesis. we release the energy stored in plants when we burn wood or plant products such as ethanol. we also use this energy to fuel our bodies by eating food that comes directly from plants or from animals that got their energy by eating plants. burning coal and petroleum also releases stored solar energy : these fuels are fossilized plant and animal matter. this chapter will introduce the basic ideas of an important area of science concerned with the amount of heat absorbed or released during chemical and physical changes — an area called thermochemistry. the concepts introduced in this chapter are widely used in almost all scientific and technical fields. food scientists use them to determine the energy content of foods. biologists study the energetics of living organisms, such as the metabolic combustion of sugar into carbon dioxide and water. the oil, gas, and transportation industries, renewable energy providers, and many others endeavor to find better methods to produce energy for our commercial and personal needs. engineers strive to improve energy efficiency, find better ways to heat and cool our homes, refrigerate
|
about half the energy used in the u. s. is used in homes and for transportation. businesses, stores, and industry use the other half.
|
( t ) ata = fire ; ita = rock, stone, metal ; y = water, river ; yby = earth, ground ; ybytu = air, wind
|
A plant needing to photosynthesize will want to be placed nearest to a
|
[
"fridge",
"bed",
"skylight",
"basement"
] |
Key fact:
a plant requires sunlight for photosynthesis
|
C
| 2
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
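The chunk above describes the graph data model: nodes and edges both carry properties, edges are labelled and directed, and relationships are traversed directly rather than recomputed through joins. A minimal in-memory sketch of that model (illustrative only, not modelled on any particular graph database product):

```python
# Minimal sketch of the graph model described above: nodes and
# edges both carry properties, and edges are labelled and
# directed, so relationships are first-class values in the store.
from collections import defaultdict

class GraphStore:
    def __init__(self):
        self.nodes = {}                     # node_id -> properties dict
        self.out_edges = defaultdict(list)  # node_id -> [(label, target, props)]

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props

    def add_edge(self, src, label, dst, **props):
        self.out_edges[src].append((label, dst, props))

    def neighbors(self, node_id, label=None):
        # One-hop traversal: the edge list is stored with the node,
        # so following a relationship is a direct lookup, not a join.
        return [dst for (lbl, dst, _) in self.out_edges[node_id]
                if label is None or lbl == label]

g = GraphStore()
g.add_node("alice", kind="person")
g.add_node("bob", kind="person")
g.add_edge("alice", "knows", "bob", since=2020)
print(g.neighbors("alice", "knows"))  # ['bob']
```

The point of the sketch is the `neighbors` call: because edges are stored with their source node, "retrieved with one operation" in the text corresponds to a single dictionary lookup here.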
|
a database catalog of a database instance consists of metadata in which definitions of database objects such as base tables, views ( virtual tables ), synonyms, value ranges, indexes, users, and user groups are stored. it is an architecture product that documents the database's content and data quality. standards the sql standard specifies a uniform means to access the catalog, called the information _ schema, but not all databases follow this, even if they implement other aspects of the sql standard. for an example of database - specific metadata access methods, see oracle metadata. see also data dictionary data lineage data catalog vocabulary, a w3c standard for metadata metadata registry, central location where metadata definitions are stored and maintained metadata repository, a database created to store metadata = = references = =
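The chunk above notes that the SQL standard's INFORMATION_SCHEMA is not universally implemented and that some systems expose the catalog through their own metadata mechanism. A small sketch of that situation using SQLite, which skips INFORMATION_SCHEMA in favour of its `sqlite_master` catalog table:

```python
# SQLite does not implement INFORMATION_SCHEMA; it exposes its
# catalog (definitions of tables, views, indexes, ...) through the
# sqlite_master table -- an example of the database-specific
# metadata access methods the text mentions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER, name TEXT)")
conn.execute("CREATE VIEW names AS SELECT name FROM employee")

# Query the catalog itself: each row describes a database object.
catalog = conn.execute(
    "SELECT type, name FROM sqlite_master ORDER BY name"
).fetchall()
print(catalog)  # [('table', 'employee'), ('view', 'names')]
```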
|
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by
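The chunk above gives the minimum definition of a relational system: data presented as tables of rows and columns, plus relational operators to manipulate them. A compact demonstration using SQLite (chosen here only because it ships with Python; any SQL-capable RDBMS behaves the same way):

```python
# Data as relations (tables of rows and columns) plus a relational
# operator (a join) recombining them by value, per the minimum
# definition in the text. SQLite is used purely for convenience.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE book (id INTEGER PRIMARY KEY, title TEXT, author_id INTEGER)")
conn.execute("INSERT INTO author VALUES (1, 'E. F. Codd')")
conn.execute("INSERT INTO book VALUES (1, 'A Relational Model of Data', 1)")

# The join operates on values, not on stored pointers between rows.
rows = conn.execute(
    "SELECT author.name, book.title FROM author "
    "JOIN book ON book.author_id = author.id"
).fetchall()
print(rows)  # [('E. F. Codd', 'A Relational Model of Data')]
```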
|
What environment has low rainfall?
|
[
"tropics",
"sandy zone",
"sandbox",
"forests"
] |
Key fact:
a desert environment has low rainfall
|
B
| 1
|
openbookqa
|
|
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
|
florabase is a public access web - based database of the flora of western australia. it provides authoritative scientific information on 12, 978 taxa, including descriptions, maps, images, conservation status and nomenclatural details. 1, 272 alien taxa ( naturalised weeds ) are also recorded. the system takes data from datasets including the census of western australian plants and the western australian herbarium specimen database of more than 803, 000 vouchered plant collections. it is operated by the western australian herbarium within the department of parks and wildlife. it was established in november 1998. in its distribution guide it uses a combination of ibra version 5. 1 and john stanley beard's botanical provinces. see also declared rare and priority flora list for other online flora databases see list of electronic floras. references external links official website
|
Carbon dioxide exists where it does because
|
[
"humans expel it",
"deer eat it",
"birds use it",
"trees absorb it"
] |
Key fact:
carbon dioxide can be found in the air
|
A
| 0
|
openbookqa
|
|
|
|
The best way to start a fire is to use
|
[
"moisture deprived logs",
"old branches",
"green branches",
"chopped logs"
] |
Key fact:
dry wood easily burns
|
A
| 0
|
openbookqa
|
|
insidewood is an online resource and database for wood anatomy, serving as a reference, research, and teaching tool. wood anatomy is a sub - area within the discipline of wood science. this freely accessible database is purely scientific and noncommercial. it was created by nc state university libraries in 2004, using funds from nc state university and the national science foundation, with the donation of wood anatomy materials by several international researchers and members of the iawa, mostly botanists, biologists and wood scientists. contents the database contains categorized anatomical descriptions of wood based on the iawa list of microscopic features for hardwood and softwood identification, complemented by a comprehensive set of photomicrographs. as of november 2023, the database contained thousands of wood anatomical descriptions and nearly 66, 000 photomicrographs of contemporary woods, along with more than 1, 600 descriptions and 2, 000 images of fossil woods. its coverage is worldwide. hosted by north carolina state university libraries, this digital collection encompasses cites - listed timber species and other endangered woody plants. its significance lies in aiding wood identification through a multi - entry key, enabling searches based on the presence or absence of iawa features. additionally, it functions as a virtual reference collection, allowing users to retrieve descriptions and images by searching scientific or common names, or other relevant keywords. the whole database contains materials from over 10, 000 woody species and 200 plant families. initiator for this wood anatomy database has been the american botanist and wood scientist
|
in computer science, the log - structured merge - tree ( also known as lsm tree, or lsmt ) is a data structure with performance characteristics that make it attractive for providing indexed access to files with high insert volume, such as transactional log data. lsm trees, like other search trees, maintain key - value pairs. lsm trees maintain data in two or more separate structures, each of which is optimized for its respective underlying storage medium ; data is synchronized between the two structures efficiently, in batches. one simple version of the lsm tree is a two - level lsm tree. as described by patrick o'neil, a two - level lsm tree comprises two tree - like structures, called c0 and c1. c0 is smaller and entirely resident in memory, whereas c1 is resident on disk. new records are inserted into the memory - resident c0 component. if the insertion causes the c0 component to exceed a certain size threshold, a contiguous segment of entries is removed from c0 and merged into c1 on disk. the performance characteristics of lsm trees stem from the fact that each component is tuned to the characteristics of its underlying storage medium, and that data is efficiently migrated across media in rolling batches, using an algorithm reminiscent of merge sort. such tuning involves writing data in a sequential manner as opposed to as a series of separate random access requests. this optimization reduces seek time in hard - disk drives ( hdds ) and latency in solid
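The chunk above walks through O'Neil's two-level LSM tree: inserts land in the memory-resident C0 component, and when C0 exceeds a size threshold its entries are merged into the on-disk C1 component in a sequential, merge-sort-like pass. A toy sketch of that structure (C1 is modelled as a sorted in-memory list purely for illustration; a real implementation writes sequential runs to disk):

```python
# Toy two-level LSM tree following the C0/C1 description above.
# C0: small memory-resident dict. C1: sorted "on-disk" component,
# modelled here as a sorted list of (key, value) pairs.
import bisect

class TwoLevelLSM:
    def __init__(self, c0_limit=4):
        self.c0_limit = c0_limit
        self.c0 = {}   # memory-resident component, absorbs all inserts
        self.c1 = []   # sorted list standing in for the disk component

    def put(self, key, value):
        self.c0[key] = value
        if len(self.c0) > self.c0_limit:
            self._merge()

    def _merge(self):
        # Roll C0 out into C1 in one sequential, merge-sort-style
        # pass -- the batched migration the text describes.
        merged = dict(self.c1)
        merged.update(self.c0)     # newer C0 entries overwrite C1's
        self.c1 = sorted(merged.items())
        self.c0 = {}

    def get(self, key, default=None):
        if key in self.c0:         # newest data is found first
            return self.c0[key]
        i = bisect.bisect_left(self.c1, (key,))
        if i < len(self.c1) and self.c1[i][0] == key:
            return self.c1[i][1]
        return default

t = TwoLevelLSM(c0_limit=2)
for k in "abcde":
    t.put(k, k.upper())
print(t.get("a"), t.get("e"))  # A E
```

The performance claim in the text corresponds to `_merge` touching C1 sequentially and in batches, instead of issuing one random write per insert.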
|
a cloudy day may obstruct visibility of which of these?
|
[
"the screen on a smartphone",
"our planet's closest star",
"the teacher in the class",
"the waitress's name tag"
] |
Key fact:
cloudy means the presence of clouds in the sky
|
B
| 1
|
openbookqa
|
screen time is the amount of time spent using an electronic device with a display screen such as a smartphone, computer, television, video game console, or tablet. the concept is under significant research with related concepts in digital media use and mental health. screen time is correlated with mental and physical harm in child development. the positive or negative health effects of screen time on a particular individual are influenced by levels and content of exposure. to prevent harmful excesses of screen time, some governments have placed regulations on usage. history statistics the first electronic screen was the cathode ray tube ( crt ), which was invented in 1922. crts were the most popular choice for display screens until the rise of liquid crystal displays ( lcds ) in the early 2000s. screens are now an essential part of entertainment, advertising, and information technologies. since their popularization in 2007, smartphones have become ubiquitous in daily life. in 2023, 85 % of american adults reported owning a smartphone. an american survey in 2016 found a median of 3. 7 minutes per hour screen use per citizen. all forms of screens are frequently used by children and teens. nationally representative data of children and teens in the united states show that the daily average of screen time increases with age. tv and video games were once largest contributors to children's screen time, but the past decade has seen a shift towards smart phones and tablets. specifically, a 2011 nationally representative survey of american parents of children from birth to age 8 suggests that tv
|
a surface computer is a computer that interacts with the user through the surface of an ordinary object, rather than through a monitor, keyboard, mouse, or other physical hardware. the term " surface computer " was first adopted by microsoft for its pixelsense ( codenamed milan ) interactive platform, which was publicly announced on 30 may 2007. featuring a horizontally - mounted 30 - inch display in a coffee table - like enclosure, users can interact with the machine's graphical user interface by touching or dragging their fingertips and other physical objects such as paintbrushes across the screen, or by setting real - world items tagged with special bar - code labels on top of it. as an example, uploading digital files only requires each object ( e. g. a bluetooth - enabled digital camera ) to be placed on the unit's display. the resulting pictures can then be moved across the screen, or their sizes and orientation can be adjusted as well. pixelsense's internal hardware includes a 2. 0 ghz core 2 duo processor, 2gb of memory, an off the shelf graphics card, a scratch - proof spill - proof surface, a dlp projector, and five infrared cameras to detect touch, unlike the iphone, which uses a capacitive display. these expensive components resulted in a price tag of between $ 12, 500 to $ 15, 000 for the hardware. the first pixelsense units were used as information kiosks in the harrah's family of casinos
|
the human media lab ( hml ) is a research laboratory in human - computer interaction at queen's university's school of computing in kingston, ontario. its goals are to advance user interface design by creating and empirically evaluating disruptive new user interface technologies, and educate graduate students in this process. the human media lab was founded in 2000 by prof. roel vertegaal and employs an average of 12 graduate students. the laboratory is known for its pioneering work on flexible display interaction and paper computers, with systems such as paperwindows ( 2004 ), paperphone ( 2010 ) and papertab ( 2012 ). hml is also known for its invention of ubiquitous eye input, such as samsung's smart pause and smart scroll technologies. research in 2003, researchers at the human media lab helped shape the paradigm attentive user interfaces, demonstrating how groups of computers could use human social cues for considerate notification. amongst hml's early inventions was the eye contact sensor, first demonstrated to the public on abc good morning america. attentive user interfaces developed at the time included an early iphone prototype that used eye tracking electronic glasses to determine whether users were in a conversation, an attentive television that play / paused contents upon looking away, mobile smart pause and smart scroll ( adopted in samsung's galaxy s4 ) as well as a technique for calibration - free eye tracking by placing invisible infrared markers in the scene. current research at the human media lab focuses
|
Jane's hat flew off her head while standing still on a hilltop. This could be because
|
[
"her head blew the hat off",
"there was uneven heating of the ground",
"a squirrel jumped up and grabbed it off of her head",
"a spaceship pulled her hat off her head"
] |
Key fact:
uneven heating of the Earth's surface causes wind
|
B
| 1
|
openbookqa
|
to analyze the sentence " the mouse lost a feather as it took off, " we can break it down into several linguistic levels : lexical, syntactic, semantic, and pragmatic. each of these levels examines different aspects of language and meaning, which can help determine the correctness of the sentence. **1. lexical level :** the lexical level pertains to the words used in the sentence and their meanings. in this case, we must consider the words " mouse, " " lost, " " feather, " and " took off. " the word " mouse " typically refers to a small rodent, while " feather " is a term associated with birds. thus, at a lexical level, there is an apparent mismatch, as mice do not have feathers. this discrepancy suggests a potential issue with the correctness of the sentence at this level. **2. syntactic level :** the syntactic level focuses on the structure and grammatical arrangement of the words in the sentence. the sentence follows a standard english structure with a subject ( " the mouse " ), a verb ( " lost " ), an object ( " a feather " ), and a subordinate clause ( " as it took off " ). from a syntactic perspective, the sentence is well - formed and adheres to english grammatical rules, indicating that it is correct at this level. **3. semantic level :**
|
a knowledge ark ( also known as a doomsday ark or doomsday vault ) is a collection of knowledge preserved in such a way that future generations would have access to said knowledge if all other copies of it were lost. scenarios where access to information ( such as the internet ) would become otherwise impossible could be described as existential risks or extinction - level events. a knowledge ark could take the form of a traditional library or a modern computer database. it could also be pictorial in nature, including photographs of important information, or diagrams of critical processes. a knowledge ark would have to be resistant to the effects of natural or man - made disasters in order to be viable. such an ark should include, but would not be limited to, information or material relevant to the survival and prosperity of human civilization. other types of knowledge arks might include genetic material, such as in a dna bank. with the potential for widespread personal dna sequencing becoming a reality, an individual might agree to store their genetic code in a digital or analog storage format which would enable later retrieval of that code. if a species was sequenced before extinction, its genome would still remain available for study. examples an example of a dna bank is the svalbard global seed vault, a seedbank which is intended to preserve a wide variety of plant seeds ( such as important crops ) in case of their extinction. the memory of mankind project involves engraving human knowledge on clay tablets and storing it in a salt mine. the engravings are microscopic
|
a figure, however, there could not have been, unless there were first a veritable body. an empty thing, or phantom, is incapable of a figure.
|
A bird is about to lay an egg, so it needs to construct a safe, round place to place the egg in. The bird constructs using
|
[
"sticks",
"gum",
"rocks",
"tape"
] |
Key fact:
a nest is made of branches
|
A
| 0
|
openbookqa
|
|
a database dump contains a record of the table structure and / or the data from a database and is usually in the form of a list of sql statements ( " sql dump " ). a database dump is most often used for backing up a database so that its contents can be restored in the event of data loss. corrupted databases can often be recovered by analysis of the dump. database dumps are often published by free content projects, to facilitate reuse, forking, offline use, and long - term digital preservation. dumps can be transported into environments with internet blackouts or otherwise restricted internet access, as well as facilitate local searching of the database using sophisticated tools such as grep. see also import and export of data core dump databases database management system sqlyog - mysql gui tool to generate database dump data portability external links mysqldump a database backup program postgresql dump backup methods, for postgresql databases.
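The chunk above defines an SQL dump as a list of SQL statements recording table structure and data, usable to restore a database. A minimal sketch with SQLite's built-in `iterdump()` (the text's `mysqldump` and the PostgreSQL dump tools play the same role for their systems):

```python
# Produce an "SQL dump" -- a list of SQL statements capturing both
# the table structure and the data -- then restore from it.
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t (id INTEGER, name TEXT)")
src.execute("INSERT INTO t VALUES (1, 'alpha')")
src.commit()

dump = "\n".join(src.iterdump())   # CREATE TABLE ... / INSERT INTO ...

# Replaying the dump against a fresh database recreates the
# contents, which is exactly the backup/restore use in the text.
restored = sqlite3.connect(":memory:")
restored.executescript(dump)
print(restored.execute("SELECT name FROM t").fetchone())  # ('alpha',)
```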
|
a crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. they are characterized by symmetry, morphology, and directionally dependent physical properties. a crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. ( molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in x - ray, neutron, and electron diffraction based crystallography ). crystal structures of crystalline material are typically determined from x - ray or neutron single - crystal diffraction data and stored in crystal structure databases. they are routinely identified by comparing reflection intensities and lattice spacings from x - ray powder diffraction data with entries in powder - diffraction fingerprinting databases. crystal structures of nanometer sized crystalline samples can be determined via structure factor amplitude information from single - crystal electron diffraction data or structure factor amplitude and phase angle information from fourier transforms of hrtem images of crystallites. they are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice - fringe fingerprint plots with entries in a lattice - fringe fingerprinting database. crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. many provide structure visualization capabilities. they can be browser based or
|
in which one of these classes are you most likely to find graphite?
|
[
"in a yoga class",
"in a philosophy class",
"in a physical education class",
"in a visual art class"
] |
Key fact:
pencil lead contains mineral graphite
|
D
| 3
|
openbookqa
|
|
in order to analyze the sentences provided, it is essential to understand the concepts of classes, instances, and properties as they pertain to ontology and knowledge representation. # # # classes classes represent categories or types of entities that share common characteristics. in the context of the sentences, classes would be general categories into which specific entities ( instances ) fall. for example : - * * gods * * : this is a class that encompasses all deities within a particular belief system or mythology. in the first sentence, " aphrodite and eros are gods, " this class encompasses both aphrodite and eros as members of the divine category. # # # instances instances are specific occurrences or examples of a class. they are particular entities that belong to a class. in the sentences provided : - * * aphrodite * * and * * eros * * : both of these names refer to specific deities in greek mythology. they serve as instances of the class " gods. " in the context of the second sentence, the relationship " aphrodite is a parent of eros " further specifies the connection between these two instances. # # # properties properties describe attributes or characteristics of instances, providing additional information about them. properties can be either qualitative or relational. in the sentences : - * * beautiful * * : this property describes an attribute of aphrodite, indicating her physical or aesthetic appeal. it is a qualitative property, providing insight into the
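The class/instance/property distinction in this chunk can be written down as subject-predicate-object triples. This is a hand-rolled sketch (predicate names invented), not a real RDF library:

```python
# The Aphrodite/Eros example as a toy triple store: class membership,
# a relational property between instances, and a qualitative property
# are all just triples.

triples = {
    ("Aphrodite", "is_a", "God"),             # instance -> class
    ("Eros", "is_a", "God"),                  # instance -> class
    ("Aphrodite", "parent_of", "Eros"),       # relational property
    ("Aphrodite", "has_trait", "beautiful"),  # qualitative property
}

def instances_of(cls):
    """All instances asserted to belong to a class."""
    return sorted(s for s, p, o in triples if p == "is_a" and o == cls)

assert instances_of("God") == ["Aphrodite", "Eros"]
```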
|
landis and koch ( 1977 ) gave the following table for interpreting κ values for a 2 - annotator 2 - class example. this table is however by no means universally accepted. they supplied no evidence to support it, basing it instead on personal opinion. it has been noted that these guidelines may be more harmful than helpful, as the number of categories and subjects will affect the magnitude of the value. for example, the kappa is higher when there are fewer categories.
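For the 2-annotator, 2-class case the chunk discusses, Cohen's κ can be computed from scratch as observed agreement corrected for chance agreement. The label sequences below are made-up sample data:

```python
# Cohen's kappa for two annotators labelling the same items with two classes.

def cohens_kappa(labels_a, labels_b):
    n = len(labels_a)
    # observed agreement: fraction of items the annotators label identically
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # expected chance agreement, from each annotator's marginal frequencies
    cats = set(labels_a) | set(labels_b)
    p_e = sum((labels_a.count(c) / n) * (labels_b.count(c) / n) for c in cats)
    return (p_o - p_e) / (1 - p_e)

a = ["yes", "yes", "no", "no", "yes", "no", "yes", "no", "yes", "yes"]
b = ["yes", "no",  "no", "no", "yes", "no", "yes", "yes", "yes", "yes"]
print(round(cohens_kappa(a, b), 3))  # p_o = 0.8, p_e = 0.52
```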
|
What decreases in an environment as the amount of rain increases?
|
[
"solar light",
"water",
"rivers",
"hydration"
] |
Key fact:
as the amount of rain increases in an environment , available sunlight will decrease in that environment
|
A
| 0
|
openbookqa
|
a water pyramid or waterpyramid is a village - scale solar still, designed to distill water using solar energy for remote communities without easy access to clean, fresh water. it provides a means whereby communities can produce potable drinking water from saline, brackish or polluted water sources. history martijn nitzsche, an engineer from the netherlands, founded aqua - aero water systems to develop water treatment and purification systems. in the early 2000s, the company invented the waterpyramid technology. the first waterpyramid was engineered and installed in collaboration with mwh global, an international environmental engineering firm, in the country of gambia in 2005. the waterpyramid desalination systems were awarded the world bank development marketplace award in 2006. description the pyramid stands about 26 feet ( 7. 9 meters ) tall, 100 feet ( 30 meters ) in diameter, and has a conical shape. it is constructed of plastic sheeting, which is inflated using a fan powered by solar energy generated by the pyramid. within the pyramid, temperatures reach up to 167 f ( 75 c ), which evaporates water pumped into thin layer of water inside the cone. distilled water runs down the sides of the pyramid wall and is collected by gutters that feed into a collection tank. when sunshine is replaced by rain, the falling water is also collected around the edge of the base of the cone and stored for use in dry weather. each pyramid can desalinate approximately
|
the national hydrography dataset ( nhd ) is a digital database of surface water features used to make maps. it contains features such as lakes, ponds, streams, rivers, canals, dams, and stream gauges for the united states. description cartographers can link to or download the nhd to use in their computer mapping software. the nhd is used to represent surface water on maps and is also used to perform geospatial analysis. it is a digital vector geospatial dataset designed for use in geographic information systems ( gis ) to analyze the flow of water throughout the nation. the dataset represents over 7. 5 - million miles of streams / rivers and 6. 5 - million lake / ponds. mapping in mapping, the nhd is used with other data themes such as elevation, boundaries, and transportation to produce general reference maps. in geospatial analysis the nhd is used by scientists using gis technology. this takes advantage of a flow direction network that can be processed to trace the flow of water downstream. a rich set of attributes used to identify the water features includes an identifier, the official name of the feature, the length or area of the feature, and metadata describing the source of the data. the identifier is used in an addressing system to link specific information about the water such as water discharge, water quality, and fish population. using the basic water features, flow network, linked information, and other characteristics,
|
this is a list of solar energy topics. a air mass coefficient agrivoltaics artificial photosynthesis b bp solar brightsource energy building - integrated photovoltaics c carbon nanotubes in photovoltaics central solar heating plant community solar farm compact linear fresnel reflector concentrating photovoltaics concentrating solar power crookes radiometer d daylighting horace de saussure desertec drake landing solar community duck curve dye - sensitized solar cell e effect of sun angle on climate energy tower ( downdraft ) euro - solar programme european photovoltaic industry association f feed - in tariff first solar flip flap floating solar ( floatovoltaics ) fresnel reflector charles fritts calvin fuller g geomagnetic storm global dimming greenhouse growth of photovoltaics h halo ( optical phenomenon ) helioseismology heliostat home energy storage i indosolar insolation abram ioffe ise ( fraunhofer institute for solar energy systems ) ivanpah solar power facility j jinko solar l light tube list of photovoltaic power stations list of solar thermal power stations loanpal m magnetic sail auguste mouchout moura photovoltaic power station n nanocrystal solar cell net metering nevada solar one p parabolic reflector parabolic trough passive solar passive solar building design photoelectric effect photovoltaic array photovoltaic system photovoltaic thermal hybrid solar collector
|
What is a riverbank made of?
|
[
"oceans",
"loam",
"rivers",
"dirty clothing"
] |
Key fact:
a riverbank is made of soil
|
B
| 1
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
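The "one model of representation" point in this chunk can be sketched as follows; the `Store` class is a toy stand-in for an OODBMS, with all names hypothetical:

```python
# The application's objects go into the store and come back out as objects
# of the same class -- no relational mapping layer in between.

import copy

class Store:                      # toy stand-in for an OODBMS
    def __init__(self):
        self._objects = {}
    def put(self, oid, obj):
        self._objects[oid] = copy.deepcopy(obj)   # "store them as objects"
    def get(self, oid):
        return copy.deepcopy(self._objects[oid])  # same class comes back out

class Part:                       # an ordinary application class
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)  # nested objects, no join needed

db = Store()
db.put("asm-1", Part("assembly", [Part("bolt"), Part("nut")]))
loaded = db.get("asm-1")
assert isinstance(loaded, Part) and loaded.children[0].name == "bolt"
```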
|
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
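The inference risk named at the end of this chunk — combining aggregate queries to learn about one individual — can be shown in a few lines. The salary table is invented sample data:

```python
# Two legitimate aggregate-only queries, differenced to leak one record.

salaries = {"ann": 50, "bob": 60, "carol": 70, "dave": 80}

def total(names):                      # an "aggregate-only" query interface
    return sum(salaries[n] for n in names)

everyone = total(salaries)                                      # sum over all
everyone_but_dave = total(n for n in salaries if n != "dave")   # all but one
# the difference of the two aggregates is exactly dave's individual value
assert everyone - everyone_but_dave == salaries["dave"]
```

This is why real statistical databases restrict overlapping query sets, add noise, or enforce minimum query-set sizes rather than simply hiding individual rows.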
|
A heavier object
|
[
"requires less force to move",
"requires minimal effort to move",
"requires more muscle power to shift",
"requires a light touch to move"
] |
Key fact:
as the mass of an object increases , the force required to push that object will increase
|
C
| 2
|
openbookqa
|
with the help of muscles, joints allow the body to move with relatively little force.
|
skeletal muscles. skeletal muscles enable the body to move.
|
kinaesthetics ( or kinesthetics, in american english ) is the study of body motion, and of the perception ( both conscious and unconscious ) of one's own body motions. kinesthesis is the learning of movements that an individual commonly performs. the individual must repeat the motions that they are trying to learn and perfect many times for this to happen. while kinesthesis may be described as " muscle memory ", muscles do not store memory ; rather, it is the proprioceptors giving the information from muscles to the brain. to do this, the individual must have a sense of the position of their body and how that changes throughout the motor skill they are trying to perform. while performing the motion the body will use receptors in the muscles to transfer information to the brain to tell the brain about what the body is doing. then after completing the same motor skill numerous times, the brain will begin to remember the motion based on the position of the body at a given time. then, after learning the motion, the body will be able to perform the motor skill even when usual senses are inhibited, such as the person closing their eyes. the body will perform the motion based on the information that is stored in the brain from previous attempts at the same movement. this is possible because the brain has formed connections between the location of body parts in space ( the body uses perception to learn where their body is in space ) and the subsequent movements that commonly follow these positions
|
A puppy was uneducated on how to go through a doggy door until
|
[
"the mom did it",
"it read how to",
"it went to school",
"it made a plan"
] |
Key fact:
animals learn some behaviors from watching their parents
|
A
| 0
|
openbookqa
|
a hierarchical database model is a data model in which the data is organized into a tree - like structure. the data are stored as records which is a collection of one or more fields. each field contains a single value, and the collection of fields in a record defines its type. one type of field is the link, which connects a given record to associated records. using links, records link to other records, and to other records, forming a tree. an example is a " customer " record that has links to that customer's " orders ", which in turn link to " line _ items ". the hierarchical database model mandates that each child record has only one parent, whereas each parent record can have zero or more child records. the network model extends the hierarchical by allowing multiple parents and children. in order to retrieve data from these databases, the whole tree needs to be traversed starting from the root node. both models were well suited to data that was normally stored on tape drives, which had to move the tape from end to end in order to retrieve data. when the relational database model emerged, one criticism of hierarchical database models was their close dependence on application - specific implementation. this limitation, along with the relational model's ease of use, contributed to the popularity of relational databases, despite their initially lower performance in comparison with the existing network and hierarchical models. history the hierarchical structure was developed by ibm in the 1960s and used in early mainframe dbms. records'relationships form a tree
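The customer → orders → line_items example in this chunk, and the root-first traversal the model requires, can be sketched with nested records (the field names follow the passage's own example; the code is a hypothetical illustration):

```python
# A hierarchical record: each child has exactly one parent, and retrieval
# traverses the whole tree starting from the root.

customer = {
    "name": "acme",                       # a record is a collection of fields
    "orders": [                           # link fields form the tree
        {"id": 1, "line_items": [{"sku": "A", "qty": 2}]},
        {"id": 2, "line_items": [{"sku": "B", "qty": 1}]},
    ],
}

def all_skus(root):
    """Walk root -> orders -> line_items, as hierarchical retrieval demands."""
    skus = []
    for order in root["orders"]:
        for item in order["line_items"]:
            skus.append(item["sku"])
    return skus

assert all_skus(customer) == ["A", "B"]
```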
|
database design is the organization of data according to a database model. the designer determines what data must be stored and how the data elements interrelate. with this information, they can begin to fit the data to the database model. a database management system manages the data accordingly. database design is a process that consists of several steps. conceptual data modeling the first step of database design involves classifying data and identifying interrelationships. the theoretical representation of data is called an ontology or a conceptual data model. determining data to be stored in a majority of cases, the person designing a database is a person with expertise in database design, rather than expertise in the domain from which the data to be stored is drawn e. g. financial information, biological information etc. therefore, the data to be stored in a particular database must be determined in cooperation with a person who does have expertise in that domain, and who is aware of the meaning of the data to be stored within the system. this process is one which is generally considered part of requirements analysis, and requires skill on the part of the database designer to elicit the needed information from those with the domain knowledge. this is because those with the necessary domain knowledge often cannot clearly express the system requirements for the database as they are unaccustomed to thinking in terms of the discrete data elements which must be stored. data to be stored can be determined by requirement specification. determining data relationships once a database designer is aware of the data which is to be
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
|
All of the following contain chloroplasts but this
|
[
"rose bushes",
"sea anemones",
"seaweed",
"algae"
] |
Key fact:
a plant cell contains chloroplasts
|
B
| 1
|
openbookqa
|
the plant dna c - values database ( https : / / cvalues. science. kew. org / ) is a comprehensive catalogue of c - value ( nuclear dna content, or in diploids, genome size ) data for land plants and algae. the database was created by prof. michael d. bennett and dr. ilia j. leitch of the royal botanic gardens, kew, uk. the database was originally launched as the " angiosperm dna c - values database " in april 1997, essentially as an online version of collected data lists that had been published by prof. bennett and colleagues since the 1970s. release 1. 0 of the more inclusive plant dna c - values database was launched in 2001, with subsequent releases 2. 0 in january 2003 and 3. 0 in december 2004. in addition to the angiosperm dataset made available in 1997, the database has been expanded taxonomically several times and now includes data from pteridophytes ( since 2000 ), gymnosperms ( since 2001 ), bryophytes ( since 2001 ), and algae ( since 2004 ) ( see ( 1 ) for update history ). ( note that each of these subset databases is cited individually as they may contain different sets of authors ). the most recent release of the database ( release 7. 1 ) went live in april 2019. it contains data for 12, 273 species of plants comprising 10, 770 angiosperms, 421 gymnos
|
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
|
florabase is a public access web - based database of the flora of western australia. it provides authoritative scientific information on 12, 978 taxa, including descriptions, maps, images, conservation status and nomenclatural details. 1, 272 alien taxa ( naturalised weeds ) are also recorded. the system takes data from datasets including the census of western australian plants and the western australian herbarium specimen database of more than 803, 000 vouchered plant collections. it is operated by the western australian herbarium within the department of parks and wildlife. it was established in november 1998. in its distribution guide it uses a combination of ibra version 5. 1 and john stanley beard's botanical provinces. see also declared rare and priority flora list for other online flora databases see list of electronic floras. references external links official website
|
When I hear news of a warm front I make sure to bring
|
[
"game boy",
"clocks",
"guns",
"waterproof appendage covers"
] |
Key fact:
a warm front causes cloudy and rainy weather
|
D
| 3
|
openbookqa
|
gun ( also known as graph universe node, gun. js, and gundb ) is an open source, offline - first, real - time, decentralized, graph database written in javascript for the web browser. the database is implemented as a peer - to - peer network distributed across " browser peers " and " runtime peers ". it employs multi - master replication with a custom commutative replicated data type ( crdt ). gun is currently used in the decentralized version of the internet archive. references external links official website gun on github
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
a data pack ( or fact pack ) is a pre - made database that can be fed to a software, such as software agents, game, internet bots or chatterbots, to teach information and facts, which it can later look up. in other words, a data pack can be used to feed minor updates into a system. introduction common data packs may include abbreviations, acronyms, dictionaries, lexicons and technical data, such as country codes, rfcs, filename extensions, tcp and udp port numbers, country calling codes, and so on. data packs may come in formats of csv and sql that can easily be parsed or imported into a database management system. the database may consist of a key - value pair, like an association list. data packs are commonly used within the video game industry to provide minor updates within their games. when a user downloads an update for a game they will be downloading loads of data packs which will contain updates for the game such as minor bug fixes or additional content. an example of a data pack used to update a game can be found on the references. example data pack a data pack datapack definition is similar to a data packet it contains loads of information ( data ) and stores it within a pack where the data can be compressed to reduce its file size. only certain programs can read a data pack therefore when the data is packed it is vital to know whether the receiving program is able to unpack the
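A CSV-format data pack of the kind this chunk describes can be parsed into a lookup table with the standard library. The pack contents here (country calling codes) are invented sample data:

```python
# Parse a key-value data pack shipped as CSV into an in-memory lookup table,
# so the consuming program can "later look up" the facts it was fed.

import csv
import io

pack = "key,value\nno,47\nde,49\nfr,33\n"   # the downloaded "pack"

table = {row["key"]: row["value"]
         for row in csv.DictReader(io.StringIO(pack))}

assert table["de"] == "49"                  # later fact lookup
```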
|
Where is a portable way of creating light most useful?
|
[
"pitch-black caverns",
"sunny days",
"a bright room",
"the sun"
] |
Key fact:
a flashlight emits light
|
A
| 0
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
|
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
|
Which organism uses xylem for materials transport?
|
[
"saguaro cactus",
"liverwort",
"green algae",
"sphagnum moss"
] |
Key fact:
xylem transports materials through the plant
|
A
| 0
|
openbookqa
|
the plant dna c - values database ( https : / / cvalues. science. kew. org / ) is a comprehensive catalogue of c - value ( nuclear dna content, or in diploids, genome size ) data for land plants and algae. the database was created by prof. michael d. bennett and dr. ilia j. leitch of the royal botanic gardens, kew, uk. the database was originally launched as the " angiosperm dna c - values database " in april 1997, essentially as an online version of collected data lists that had been published by prof. bennett and colleagues since the 1970s. release 1. 0 of the more inclusive plant dna c - values database was launched in 2001, with subsequent releases 2. 0 in january 2003 and 3. 0 in december 2004. in addition to the angiosperm dataset made available in 1997, the database has been expanded taxonomically several times and now includes data from pteridophytes ( since 2000 ), gymnosperms ( since 2001 ), bryophytes ( since 2001 ), and algae ( since 2004 ) ( see ( 1 ) for update history ). ( note that each of these subset databases is cited individually as they may contain different sets of authors ). the most recent release of the database ( release 7. 1 ) went live in april 2019. it contains data for 12, 273 species of plants comprising 10, 770 angiosperms, 421 gymnos
|
florabase is a public access web - based database of the flora of western australia. it provides authoritative scientific information on 12, 978 taxa, including descriptions, maps, images, conservation status and nomenclatural details. 1, 272 alien taxa ( naturalised weeds ) are also recorded. the system takes data from datasets including the census of western australian plants and the western australian herbarium specimen database of more than 803, 000 vouchered plant collections. it is operated by the western australian herbarium within the department of parks and wildlife. it was established in november 1998. in its distribution guide it uses a combination of ibra version 5. 1 and john stanley beard's botanical provinces. see also : declared rare and priority flora list ; for other online flora databases see list of electronic floras.
|
dr. duke's phytochemical and ethnobotanical databases is an online database developed by james a. duke at the usda. the databases report species, phytochemicals, and biological activity, as well as ethnobotanical uses. the current phytochemical and ethnobotanical databases facilitate plant, chemical, bioactivity, and ethnobotany searches. a large number of plants and their chemical profiles are covered, and data are structured to support browsing and searching in several user - focused ways. for example, users can : get a list of chemicals and activities for a specific plant of interest, using either its scientific or common name ; download a list of chemicals and their known activities in pdf or spreadsheet form ; find plants with chemicals known for a specific biological activity ; display a list of chemicals with their ld toxicity data ; find plants with potential cancer - preventing activity ; display a list of plants for a given ethnobotanical use ; find out which plants have the highest levels of a specific chemical. references to the supporting scientific publications are provided for each specific result. also included are links to nutritional databases, plants and cancer treatments and other plant - related databases. the content of the database is licensed under the creative commons cc0 public domain. references : ( dataset ) u. s. department of agriculture, agricultural research service. 1992 - 2016
|
Which is more likely the result of a big earthquake
|
[
"a mountain",
"a big house",
"a modern airplane.",
"a fancy car"
] |
Key fact:
earthquakes cause rock layers to fold on top of each other
|
A
| 0
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as nosql databases. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
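The node-edge-property model described above can be sketched in plain Python (a toy illustration, not any particular graph database's API; the node names, edge label, and `neighbours` helper are all invented):

```python
# Nodes with properties, and directed, labelled edges that carry their own
# properties -- relationships stored as first-class records, as described above.
nodes = {"alice": {"kind": "person"}, "bob": {"kind": "person"}}
edges = [
    {"src": "alice", "dst": "bob", "label": "follows", "props": {"since": 2020}},
]

def neighbours(node, label):
    """Follow outgoing edges with a given label. Because the relationships
    are stored directly, this is a single pass over edges, not a join."""
    return [e["dst"] for e in edges if e["src"] == node and e["label"] == label]
```

For example, `neighbours("alice", "follows")` yields `["bob"]` in one operation.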
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases, which are table - oriented. a third type, object - relational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview : object - oriented database management systems ( oodbmss ), also called odbms ( object database management system ), combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop products, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
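To illustrate the contrast with table-oriented storage, here is a minimal sketch of persisting an application object graph as-is, using Python's standard `pickle` serializer as a stand-in for an OODBMS persistence layer (the `Part` class and its fields are invented for the example; a real object database adds querying, indexing, and transactions on top):

```python
import pickle

class Part:
    """An application object stored directly, without mapping to tables."""
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

# A small object graph, as the application itself would build it.
engine = Part("engine", [Part("piston"), Part("valve")])

# Store and retrieve the whole graph in one step -- the same model of
# representation in the program and in the store.
blob = pickle.dumps(engine)
restored = pickle.loads(blob)
```

The point of the sketch is that no relational mapping layer intervenes: the stored form mirrors the in-memory objects.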
|
a database model is a type of data model that determines the logical structure of a database and fundamentally determines in which manner data can be stored, organized, and manipulated. the most popular example of a database model is the relational model ( or the sql approximation of relational ), which uses a table - based format. common logical data models for databases include : navigational databases, hierarchical database model, network model, graph database, relational model, entity – relationship model, enhanced entity – relationship model, object model, document model, entity – attribute – value model, star schema. an object – relational database combines the two related structures. physical data models include : inverted index, flat file. other models include : multidimensional model, array model, multivalue model. specialized models are optimized for particular types of data : xml database, semantic model, content store, event store, time series model
|
A thermal insulator slows the transfer of what?
|
[
"warmness",
"light",
"energy",
"liquid"
] |
Key fact:
a thermal insulator slows the transfer of heat
|
A
| 0
|
openbookqa
|
the chemical database service is an epsrc - funded mid - range facility that provides uk academic institutions with access to a number of chemical databases. it has been hosted by the royal society of chemistry since 2013, before which it was hosted by daresbury laboratory ( part of the science and technology facilities council ). currently, the included databases are : acd / i - lab, a tool for prediction of physicochemical properties and nmr spectra from a chemical structure ; available chemicals directory, a structure - searchable database of commercially available chemicals ; cambridge structural database ( csd ), a crystallographic database of organic and organometallic structures ; inorganic crystal structure database ( icsd ), a crystallographic database of inorganic structures ; crystalworks, a database combining data from csd, icsd and crystmet ; detherm, a database of thermophysical data for chemical compounds and mixtures ; spresiweb, a database of organic compounds and reactions
|
a crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. they are characterized by symmetry, morphology, and directionally dependent physical properties. a crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. ( molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in x - ray, neutron, and electron diffraction based crystallography ). crystal structures of crystalline material are typically determined from x - ray or neutron single - crystal diffraction data and stored in crystal structure databases. they are routinely identified by comparing reflection intensities and lattice spacings from x - ray powder diffraction data with entries in powder - diffraction fingerprinting databases. crystal structures of nanometer sized crystalline samples can be determined via structure factor amplitude information from single - crystal electron diffraction data or structure factor amplitude and phase angle information from fourier transforms of hrtem images of crystallites. they are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice - fringe fingerprint plots with entries in a lattice - fringe fingerprinting database. crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. many provide structure visualization capabilities. they can be browser based or
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as nosql databases. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
The bear in the wild needs to find other animals to feast.
|
[
"they are killers",
"they only eat",
"they never kill",
"they are docile"
] |
Key fact:
lizards eat insects
|
A
| 0
|
openbookqa
|
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
|
genedb was a genome database for eukaryotic and prokaryotic pathogens. external links : http://www.genedb.org
|
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb also includes a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb : the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without losing focus on the searched protein or gene. in phylomedb v4 all the information available for each tree
|
If the part of a tree that contains chloroplasts has flatter surfaces they have more
|
[
"vibrant colors",
"absorbing mass",
"life",
"friends"
] |
Key fact:
as flatness of a leaf increases , the amount of sunlight that leaf can absorb will increase
|
B
| 1
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as nosql databases. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ) system, rather than an oltp ( online transaction processing ) system. modern decision-support and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse, with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in place and use compression techniques to squeeze them out, or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases : in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
|
flockdb was an open - source distributed, fault - tolerant graph database for managing wide but shallow network graphs. it was initially used by twitter to store relationships between users, e. g. followings and favorites. flockdb differs from other graph databases, e. g. neo4j, in that it was not designed for multi - hop graph traversal but rather for rapid set operations, not unlike the primary use - case for redis sets. flockdb was posted on github shortly after twitter released its gizzard framework, which it used to query the flockdb distributed datastore. the database is licensed under the apache license. twitter no longer supports flockdb. see also : gizzard ( scala framework )
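FlockDB's niche, rapid single-hop set operations over a wide, shallow graph, can be mimicked with plain Python sets (the user names and the `follows` mapping are invented; FlockDB itself distributed these sets across shards):

```python
# Each key maps a user to the set of accounts they follow -- a wide,
# shallow graph with no multi-hop traversal required.
follows = {
    "alice": {"bob", "carol", "dave"},
    "bob": {"carol", "eve"},
}

# A typical single-hop query: accounts followed by both users.
common = follows["alice"] & follows["bob"]
```

Set intersection, union, and difference cover most of the queries this kind of store was built for.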
|
If your dog is getting noticeably skinnier, you need to
|
[
"increase its food intake",
"play some video games",
"feed it less food",
"Make it fly away"
] |
Key fact:
as the amount of food an animal eats decreases , that organism will become thinner
|
A
| 0
|
openbookqa
|
eating ( also known as consuming ) is the ingestion of food. in biology, this is typically done to provide a heterotrophic organism with energy and nutrients and to allow for growth. animals and other heterotrophs must eat in order to survive : carnivores eat other animals, herbivores eat plants, omnivores consume a mixture of both plant and animal matter, and detritivores eat detritus. fungi digest organic matter outside their bodies, as opposed to animals, which digest their food inside their bodies. for humans, eating is more complex, but is typically an activity of daily living. physicians and dieticians consider a healthful diet essential for maintaining peak physical condition. some individuals may limit their amount of nutritional intake. this may be a result of a lifestyle choice : as part of a diet or as religious fasting. limited consumption may be due to hunger or famine. overconsumption of calories may lead to obesity ; the reasons behind it are myriad, but its prevalence has led some to declare an " obesity epidemic ". eating practices among humans : many homes have a large kitchen area devoted to preparation of meals and food, and may have a dining room, dining hall, or another designated area for eating. most societies also have restaurants, food courts, and food vendors so that people may eat when away from home, when lacking time to prepare food, or as a social occasion. at their highest level of sophistication,
|
food provides building materials for the body. the body needs building materials for growth and repair.
|
carbohydrates, proteins, and lipids contain energy. when your body digests food, it breaks down the molecules of these nutrients. this releases the energy so your body can use it.
|
Which two forces are likely the cause of canyons?
|
[
"water plus fire",
"fire and brimstone",
"water plus gravity",
"H20 and lemmings"
] |
Key fact:
most canyons are formed by flowing rivers through erosion over long periods of time
|
C
| 2
|
openbookqa
|
metpetdb is a relational database and repository for global geochemical data on and images collected from metamorphic rocks from the earth's crust. metpetdb is designed and built by a global community of metamorphic petrologists in collaboration with computer scientists at rensselaer polytechnic institute as part of the national cyberinfrastructure initiative and supported by the national science foundation. metpetdb is unique in that it incorporates image data collected by a variety of techniques, e. g. photomicrographs, backscattered electron images ( sem ), and x - ray maps collected by wavelength dispersive spectroscopy or energy dispersive spectroscopy. purpose metpetdb was built for the purpose of archiving published data and for storing new data for ready access to researchers and students in the petrologic community. this database facilitates the gathering of information for researchers beginning new projects and permits browsing and searching for data relating to anywhere on the globe. metpetdb provides a platform for collaborative studies among researchers anywhere on the planet, serves as a portal for students beginning their studies of metamorphic geology, and acts as a repository of vast quantities of data being collected by researchers globally. design the basic structure of metpetdb is based on a geologic sample and derivative subsamples. geochemical data are linked to subsamples and the minerals within them, while image data can relate to samples or subsamples. metpetdb is designed to store the distinct spatial / textural context
|
blazegraph is an open source triplestore and graph database, written in java. it has been abandoned since 2020 and is known to be used in production by wmde for the wikidata sparql endpoint. it is licensed under the gnu gpl ( version 2 ). amazon acquired the blazegraph developers, and blazegraph open source development essentially stopped in april 2018. early history : the system was first known as bigdata. since release of version 1. 5 ( 12 february 2015 ), it has been named blazegraph. prominent users : the wikimedia foundation uses blazegraph for the wikidata query service, which is a sparql endpoint. sophox, a fork of the wikidata query service, specializes in openstreetmap queries. the datatourisme project uses blazegraph as the database platform ; however, graphql is used as the query language instead of sparql. notable features : rdf*, an alternative approach to rdf reification, which gives rdf graphs the capabilities of lpg graphs ; as a consequence of the previous, the ability to query graphs both in sparql and gremlin ; as an alternative to gremlin querying, gas abstraction over rdf graphs support in sparql ; the service syntax of federated queries for extending functionality ; managed behavior of the query plan generator ; reusable named subqueries. acqui -
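A triplestore like the one described holds subject-predicate-object statements; the sketch below is a toy Python pattern matcher in the spirit of a SPARQL basic graph pattern, not Blazegraph's actual API (the Wikidata-style identifiers are purely illustrative):

```python
# Toy triplestore: facts as ( subject, predicate, object ) tuples.
triples = {
    ("wd:Q42", "rdf:type", "wd:Human"),
    ("wd:Q42", "rdfs:label", "Douglas Adams"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard,
    loosely like a variable in a SPARQL triple pattern."""
    return {t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}
```

A real engine adds indexing, inference, and a full query planner on top of this pattern-matching core.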
|
the chemical database service is an epsrc - funded mid - range facility that provides uk academic institutions with access to a number of chemical databases. it has been hosted by the royal society of chemistry since 2013, before which it was hosted by daresbury laboratory ( part of the science and technology facilities council ). currently, the included databases are : acd / i - lab, a tool for prediction of physicochemical properties and nmr spectra from a chemical structure ; available chemicals directory, a structure - searchable database of commercially available chemicals ; cambridge structural database ( csd ), a crystallographic database of organic and organometallic structures ; inorganic crystal structure database ( icsd ), a crystallographic database of inorganic structures ; crystalworks, a database combining data from csd, icsd and crystmet ; detherm, a database of thermophysical data for chemical compounds and mixtures ; spresiweb, a database of organic compounds and reactions
|
A tuna would prefer to consume
|
[
"An Apple",
"beef",
"Nemo",
"dogs"
] |
Key fact:
tuna eat fish
|
C
| 2
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as nosql databases. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
the animal genome size database is a catalogue of published genome size estimates for vertebrate and invertebrate animals. it was created in 2001 by dr. t. ryan gregory of the university of guelph in canada. as of september 2005, the database contains data for over 4, 000 species of animals. a similar database, the plant dna c - values database ( c - value being analogous to genome size in diploid organisms ) was created by researchers at the royal botanic gardens, kew, in 1997. see also list of organisms by chromosome count references external links animal genome size database plant dna c - values database fungal genome size database cell size database
|
the vertebrate genome annotation ( vega ) database is a biological database dedicated to assisting researchers in locating specific areas of the genome and annotating genes or regions of vertebrate genomes. the vega browser is based on ensembl web code and infrastructure and provides a public curation of known vertebrate genes for the scientific community. the vega website is updated frequently to maintain the most current information about vertebrate genomes and attempts to present consistently high - quality annotation of all its published vertebrate genomes or genome regions. vega was developed by the wellcome trust sanger institute and is in close association with other annotation databases, such as zfin ( the zebrafish information network ), the havana group and genbank. manual annotation is currently more accurate at identifying splice variants, pseudogenes, polyadenylation features, non - coding regions and complex gene arrangements than automated methods. history the vertebrate genome annotation ( vega ) database was first made public in 2004 by the wellcome trust sanger institute. it was designed to view manual annotations of human, mouse and zebrafish genomic sequences, and it is the central cache for genome sequencing centers to deposit their annotation of human chromosomes. manual annotation of genomic data is extremely valuable to produce an accurate reference gene set but is expensive compared with automatic methods and so has been limited to model organisms. annotation tools
|
Who can hear sounds?
|
[
"boulders",
"giraffes",
"rocks",
"stone statues"
] |
Key fact:
when sound reaches the ear , that sound can be heard
|
B
| 1
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as nosql databases. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
metpetdb is a relational database and repository for global geochemical data on and images collected from metamorphic rocks from the earth's crust. metpetdb is designed and built by a global community of metamorphic petrologists in collaboration with computer scientists at rensselaer polytechnic institute as part of the national cyberinfrastructure initiative and supported by the national science foundation. metpetdb is unique in that it incorporates image data collected by a variety of techniques, e. g. photomicrographs, backscattered electron images ( sem ), and x - ray maps collected by wavelength dispersive spectroscopy or energy dispersive spectroscopy. purpose metpetdb was built for the purpose of archiving published data and for storing new data for ready access to researchers and students in the petrologic community. this database facilitates the gathering of information for researchers beginning new projects and permits browsing and searching for data relating to anywhere on the globe. metpetdb provides a platform for collaborative studies among researchers anywhere on the planet, serves as a portal for students beginning their studies of metamorphic geology, and acts as a repository of vast quantities of data being collected by researchers globally. design the basic structure of metpetdb is based on a geologic sample and derivative subsamples. geochemical data are linked to subsamples and the minerals within them, while image data can relate to samples or subsamples. metpetdb is designed to store the distinct spatial / textural context
|
blazegraph is an open source triplestore and graph database, written in java. it has been abandoned since 2020 and is known to be used in production by wmde for the wikidata sparql endpoint. it is licensed under the gnu gpl ( version 2 ). amazon acquired the blazegraph developers and the blazegraph open source development was essentially stopped in april 2018. early history the system was first known as bigdata. since release of version 1. 5 ( 12 february 2015 ), it is named blazegraph. prominent users the wikimedia foundation uses blazegraph for the wikidata query service, which is a sparql endpoint. sophox, a fork of the wikidata query service, specializes in openstreetmap queries. the datatourisme project uses blazegraph as the database platform ; however, graphql is used as the query language instead of sparql. notable features rdf * an alternative approach to rdf reification, which gives rdf graphs capabilities of lpg graphs ; as the consequence of the previous, ability of querying graphs both in sparql and gremlin ; as an alternative to gremlin querying, gas abstraction over rdf graphs support in sparql ; the service syntax of federated queries for functionality extending ; managed behavior of the query plan generator ; reusable named subqueries. acqui -
|
What best describes the relationship between the moon, Earth, and the sun?
|
[
"the Earth is absorbing sunlight",
"the moon is equidistant from the sun and Earth",
"the moon is a star",
"the sun travels around the Earth"
] |
Key fact:
the moon reflects sunlight towards the Earth
|
A
| 0
|
openbookqa
|
our sun is a star, a sphere of plasma held together by gravity. it is an ordinary star that is extraordinarily important. the sun provides light and heat to our planet. this star supports almost all life on earth.
|
the earth, moon and sun are linked together in space. monthly or daily cycles continually remind us of these links. every month, you can see the moon change. this is due to where it is relative to the sun and earth. in one phase, the moon is brightly illuminated - a full moon. in the opposite phase it is completely dark - a new moon. in between, it is partially lit up. when the moon is in just the right position, it causes an eclipse. the daily tides are another reminder of the moon and sun. they are caused by the pull of the moon and the sun on the earth. tides were discussed in the oceans chapter.
|
sunlight is the portion of the electromagnetic radiation which is emitted by the sun ( i. e. solar radiation ) and received by the earth, in particular the visible light perceptible to the human eye as well as invisible infrared ( typically perceived by humans as warmth ) and ultraviolet ( which can have physiological effects such as sunburn ) lights. however, according to the american meteorological society, there are " conflicting conventions as to whether all three [... ] are referred to as light, or whether that term should only be applied to the visible portion of the spectrum. " upon reaching the earth, sunlight is scattered and filtered through the earth's atmosphere as daylight when the sun is above the horizon. when direct solar radiation is not blocked by clouds, it is experienced as sunshine, a combination of bright light and radiant heat ( atmospheric ). when blocked by clouds or reflected off other objects, sunlight is diffused. sources estimate a global average of between 164 watts to 340 watts per square meter over a 24 - hour day ; this figure is estimated by nasa to be about a quarter of earth's average total solar irradiance. the ultraviolet radiation in sunlight has both positive and negative health effects, as it is both a requisite for vitamin d3 synthesis and a mutagen. sunlight takes about 8. 3 minutes to reach earth from the surface of the sun. a photon starting at the center of the sun and changing direction every time it encounters a charged particle would take between 10
|
Magma pours out a volcano and what off a cliff
|
[
"drips",
"suspends",
"freezes",
"sticks"
] |
Key fact:
matter in the liquid state drips
|
A
| 0
|
openbookqa
|
a database dump contains a record of the table structure and / or the data from a database and is usually in the form of a list of sql statements ( " sql dump " ). a database dump is most often used for backing up a database so that its contents can be restored in the event of data loss. corrupted databases can often be recovered by analysis of the dump. database dumps are often published by free content projects, to facilitate reuse, forking, offline use, and long - term digital preservation. dumps can be transported into environments with internet blackouts or otherwise restricted internet access, as well as facilitate local searching of the database using sophisticated tools such as grep. see also import and export of data core dump databases database management system sqlyog - mysql gui tool to generate database dump data portability external links mysqldump a database backup program postgresql dump backup methods, for postgresql databases.
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
A way to keep a cup of coffee warm is to
|
[
"cook it in the oven",
"heat it with a torch",
"put it in the sun",
"use a heated plate"
] |
Key fact:
a hot plate is a source of heat
|
D
| 3
|
openbookqa
|
heat from a wood fire can boil a pot of water. if you put an egg in the pot, you can eat a hard boiled egg in 15 minutes ( cool it down first! ). the energy to cook the egg was stored in the wood. the wood got that energy from the sun when it was part of a tree. the sun generated the energy by nuclear fusion. you started the fire with a match. the head of the match stores energy as chemical energy. that energy lights the wood on fire. the fire burns as long as there is energy in the wood. once the wood has burned up, there is no energy left in it. the fire goes out.
|
over 90 % of the energy we use comes originally from the sun. every day, the sun provides the earth with almost 10, 000 times the amount of energy necessary to meet all of the world ’ s energy needs for that day. our challenge is to find ways to convert and store incoming solar energy so that it can be used in reactions or chemical processes that are both convenient and nonpolluting. plants and many bacteria capture solar energy through photosynthesis. we release the energy stored in plants when we burn wood or plant products such as ethanol. we also use this energy to fuel our bodies by eating food that comes directly from plants or from animals that got their energy by eating plants. burning coal and petroleum also releases stored solar energy : these fuels are fossilized plant and animal matter. this chapter will introduce the basic ideas of an important area of science concerned with the amount of heat absorbed or released during chemical and physical changes — an area called thermochemistry. the concepts introduced in this chapter are widely used in almost all scientific and technical fields. food scientists use them to determine the energy content of foods. biologists study the energetics of living organisms, such as the metabolic combustion of sugar into carbon dioxide and water. the oil, gas, and transportation industries, renewable energy providers, and many others endeavor to find better methods to produce energy for our commercial and personal needs. engineers strive to improve energy efficiency, find better ways to heat and cool our homes, refrigerate
|
laboratory ovens are a common piece of equipment that can be found in electronics, materials processing, forensic, and research laboratories. these ovens generally provide pinpoint temperature control and uniform temperatures throughout the heating process. the following applications are some of the common uses for laboratory ovens : annealing, die - bond curing, drying or dehydrating, polyimide baking, sterilizing, evaporating. typical sizes are from one cubic foot to 0. 9 cubic metres ( 32 cu ft ). some ovens can reach temperatures that are higher than 300 degrees celsius. these temperatures are then applied from all sides of the oven to provide constant heat to sample. laboratory ovens can be used in numerous different applications and configurations, including clean rooms, forced convection, horizontal airflow, inert atmosphere, natural convection, and pass through. there are many types of laboratory ovens that are used throughout laboratories. standard digital ovens are mainly used for drying and heating processes while providing temperature control and safety. heavy duty ovens are used more in the industrial laboratories and provide testing and drying for biological samples. high temperature ovens are custom built and have additional insulation lining. this is needed for the oven due to its high temperatures that can reach up to 500 degrees celsius. other forms of the laboratory oven include vacuum ovens, forced air convection ovens, and gravity convection ovens. forensic labs use vacuum ovens that have been configured in specific ways to assist in
|
A pulley is used to do what with objects?
|
[
"crush",
"cool",
"increase altitude",
"elevate significance"
] |
Key fact:
a pulley is used for lifting objects
|
C
| 2
|
openbookqa
|
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
|
then the bulk of effort concentrates on writing the proper mediator code that will transform predicates on weather into a query over the weather website. this effort can become complex if some other source also relates to weather, because the designer may need to write code to properly combine the results from the two sources. on the other hand, in lav, the source database is modeled as a set of views over g { \ displaystyle g }.
|
then the bulk of effort concentrates on writing the proper mediator code that will transform predicates on weather into a query over the weather website. this effort can become complex if some other source also relates to weather, because the designer may need to write code to properly combine the results from the two sources. on the other hand, in lav, the source database is modeled as a set of views over g { \ displaystyle g }.
|
A decomposer might thrive more on
|
[
"Magic",
"Jupiter",
"Time Traveling",
"Old turkey"
] |
Key fact:
dead organisms are the source of nutrients for decomposers
|
D
| 3
|
openbookqa
|
a temporal database stores data relating to time instances. it offers temporal data types and stores information relating to past, present and future time. temporal databases can be uni - temporal, bi - temporal or tri - temporal. more specifically the temporal aspects usually include valid time, transaction time and / or decision time. valid time is the time period during or event time at which a fact is true in the real world. transaction time is the time at which a fact was recorded in the database. decision time is the time at which the decision was made about the fact. used to keep a history of decisions about valid times. types uni - temporal a uni - temporal database has one axis of time, either the validity range or the system time range. bi - temporal a bi - temporal database has two axes of time : valid time transaction time or decision time tri - temporal a tri - temporal database has three axes of time : valid time transaction time decision time this approach introduces additional complexities. temporal databases are in contrast to current databases ( not to be confused with currently available databases ), which store only facts which are believed to be true at the current time. features temporal databases support managing and accessing temporal data by providing one or more of the following features : a time period datatype, including the ability to represent time periods with no end ( infinity or forever ) the ability to define valid and transaction time period attributes and bitemporal relations system - maintained transaction time temporal primary keys, including
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
|
jet propulsion laboratory development ephemeris ( abbreviated jpl de ( number ), or simply de ( number ) ) designates one of a series of mathematical models of the solar system produced at the jet propulsion laboratory in pasadena, california, for use in spacecraft navigation and astronomy. the models consist of numeric representations of positions, velocities and accelerations of major solar system bodies, tabulated at equally spaced intervals of time, covering a specified span of years. barycentric rectangular coordinates of the sun, eight major planets and pluto, and geocentric coordinates of the moon are tabulated. history there have been many versions of the jpl de, from the 1960s through the present, in support of both robotic and crewed spacecraft missions. available documentation is limited, but we know de69 was announced in 1969 to be the third release of the jpl ephemeris tapes, and was a special purpose, short - duration ephemeris. the then - current jpl export ephemeris was de19. these early releases were distributed on magnetic tape. in the days before personal computers, computers were large and expensive, and numerical integrations such as these were run by large organizations with ample resources. the jpl ephemerides prior to de405 were integrated on a univac mainframe in double precision. for instance, de102, which was created in 1977, took six million steps and ran for nine days on a univa
|
Which organism likely contains chlorophyll?
|
[
"bamboo",
"pandas",
"protozoa",
"humans"
] |
Key fact:
chlorophyll is used for absorbing light energy by plants
|
A
| 0
|
openbookqa
|
genedb was a genome database for eukaryotic and prokaryotic pathogens. references external links http://www.genedb.org
|
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
|
the eukaryotic pathogen vector and host database, or veupathdb, is a database of genomics and experimental data related to various eukaryotic pathogens. it was established in 2006 under a national institutes of health program to create bioinformatics resource centers to facilitate research on pathogens that may pose biodefense threats. veupathdb stores data related to its organisms of interest and provides tools for searching through and analyzing the data. it currently consists of 14 component databases, each dedicated to a certain research topic. veupathdb includes : genomics resources covering eukaryotic protozoan parasites host responses to parasite infection ( hostdb ) orthologs ( orthomcl ) clinical study data ( clinepidb ) microbiome data ( microbiomedb ) history veupathdb was established under the nih bioinformatics resource centers program as apidb, a resource meant to cover apicomplexan parasites. apidb originally consisted of component sites cryptodb ( for cryptosporidium ), plasmodb ( for plasmodium ), and toxodb ( for toxoplasma gondii ). as apidb grew to focus on eukaryotic pathogens beyond apicomplexans, the name was changed to eupathdb to support its broadened scope. eupathdb was the result of collaboration between many different parasitologists, including david roos,
|
A light bulb turns on when it receives energy from
|
[
"a cable",
"an oven",
"gasoline",
"a person"
] |
Key fact:
when electricity flows to a light bulb , the light bulb will come on
|
A
| 0
|
openbookqa
|
the chemical database service is an epsrc - funded mid - range facility that provides uk academic institutions with access to a number of chemical databases. it has been hosted by the royal society of chemistry since 2013, before which it was hosted by daresbury laboratory ( part of the science and technology facilities council ). currently, the included databases are : acd / i - lab, a tool for prediction of physicochemical properties and nmr spectra from a chemical structure available chemicals directory, a structure - searchable database of commercially available chemicals cambridge structural database ( csd ), a crystallographic database of organic and organometallic structures inorganic crystal structure database ( icsd ), a crystallographic database of inorganic structures crystalworks, a database combining data from csd, icsd and crystmet detherm, a database of thermophysical data for chemical compounds and mixtures spresiweb, a database of organic compounds and reactions = = references = =
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
metpetdb is a relational database and repository for global geochemical data on and images collected from metamorphic rocks from the earth's crust. metpetdb is designed and built by a global community of metamorphic petrologists in collaboration with computer scientists at rensselaer polytechnic institute as part of the national cyberinfrastructure initiative and supported by the national science foundation. metpetdb is unique in that it incorporates image data collected by a variety of techniques, e. g. photomicrographs, backscattered electron images ( sem ), and x - ray maps collected by wavelength dispersive spectroscopy or energy dispersive spectroscopy. purpose metpetdb was built for the purpose of archiving published data and for storing new data for ready access to researchers and students in the petrologic community. this database facilitates the gathering of information for researchers beginning new projects and permits browsing and searching for data relating to anywhere on the globe. metpetdb provides a platform for collaborative studies among researchers anywhere on the planet, serves as a portal for students beginning their studies of metamorphic geology, and acts as a repository of vast quantities of data being collected by researchers globally. design the basic structure of metpetdb is based on a geologic sample and derivative subsamples. geochemical data are linked to subsamples and the minerals within them, while image data can relate to samples or subsamples. metpetdb is designed to store the distinct spatial / textural context
|
Magma is sourced in volcanoes and
|
[
"is high enough kelvin to melt steel",
"on the desert plains",
"is beneath the aliens",
"can freeze water at all times"
] |
Key fact:
volcanoes are often found under oceans
|
A
| 0
|
openbookqa
|
it has been postulated that surface ice may be responsible for these high luminosity levels, as the silicate rocks that compose most of the surface of mercury have exactly the opposite effect on luminosity. in spite of its proximity to the sun, mercury may have surface ice, since temperatures near the poles are constantly below freezing point : on the polar plains, the temperature does not rise above −106 °c. and craters at mercury's higher latitudes ( discovered by radar surveys from earth as well ) may be deep enough to shield the ice from direct sunlight.
|
when there was a magnetic field, the atmosphere would have been protected from erosion by the solar wind, which would ensure the maintenance of a dense atmosphere, necessary for liquid water to exist on the surface of mars. the loss of the atmosphere was accompanied by decreasing temperatures. part of the liquid water inventory sublimed and was transported to the poles, while the rest became trapped in permafrost, a subsurface ice layer. observations on earth and numerical modeling have shown that a crater - forming impact can result in the creation of a long - lasting hydrothermal system when ice is present in the crust.
|
ice is water that is frozen into a solid state, typically forming at or below temperatures of 0 °c, 32 °f, or 273. 15 k. it occurs naturally on earth, on other planets, in oort cloud objects, and as interstellar ice. as a naturally occurring crystalline inorganic solid with an ordered structure, ice is considered to be a mineral. depending on the presence of impurities such as particles of soil or bubbles of air, it can appear transparent or a more or less opaque bluish - white color. virtually all of the ice on earth is of a hexagonal crystalline structure denoted as ice ih ( spoken as " ice one h " ). depending on temperature and pressure, at least nineteen phases ( packing geometries ) can exist. the most common phase transition to ice ih occurs when liquid water is cooled below 0 °c ( 273. 15 k, 32 °f ) at standard atmospheric pressure. when water is cooled rapidly ( quenching ), up to three types of amorphous ice can form. interstellar ice is overwhelmingly low - density amorphous ice ( lda ), which likely makes lda ice the most abundant type in the universe. when cooled slowly, correlated proton tunneling occurs below −253. 15 °c ( 20 k, −423. 67 °f ) giving rise to macroscopic quantum phenomena. ice is abundant on the earth's surface, particularly in the polar regions and above the snow line, where it can aggregate from snow to
|
how does an animal know to perform certain crucial life actions before exposure to it?
|
[
"it is built into their very being",
"it is taught in school",
"they are trained at a special school",
"they have magical powers"
] |
Key fact:
An example of an instinct is the kangaroo 's ability to crawl into its mother 's pouch to drink milk
|
A
| 0
|
openbookqa
|
their aim is to help students in a specific field of study. to do so, they build up a user model where they store information about abilities, knowledge and needs of the user. the system can now adapt to this user by presenting appropriate exercises and examples and offering hints and help where the user is most likely to need them.
|
their aim is to help students in a specific field of study. to do so, they build up a user model where they store information about abilities, knowledge and needs of the user. the system can now adapt to this user by presenting appropriate exercises and examples and offering hints and help where the user is most likely to need them.
|
learning is the process of acquiring new understanding, knowledge, behaviors, skills, values, attitudes, and preferences. the ability to learn is possessed by humans, non - human animals, and some machines ; there is also evidence for some kind of learning in certain plants. some learning is immediate, induced by a single event ( e. g. being burned by a hot stove ), but much skill and knowledge accumulate from repeated experiences. the changes induced by learning often last a lifetime, and it is hard to distinguish learned material that seems to be " lost " from that which cannot be retrieved. human learning starts at birth ( it might even start before ) and continues until death as a consequence of ongoing interactions between people and their environment. the nature and processes involved in learning are studied in many established fields ( including educational psychology, neuropsychology, experimental psychology, cognitive sciences, and pedagogy ), as well as emerging fields of knowledge ( e. g. with a shared interest in the topic of learning from safety events such as incidents / accidents, or in collaborative learning health systems ). research in such fields has led to the identification of various sorts of learning. for example, learning may occur as a result of habituation, or classical conditioning, operant conditioning or as a result of more complex activities such as play, seen only in relatively intelligent animals. learning may occur consciously or without conscious awareness. learning that an aversive event cannot be avoided or escaped may result in a
|
Winter in the Northern Hemisphere means
|
[
"the Northern Hemisphere is experiencing scorching hot weather",
"the Northern Hemisphere is experiencing daily torrential rain",
"the Southern Hemisphere is experiencing warm sunny days",
"the Southern Hemisphere is experiencing frigid temperatures"
] |
Key fact:
winter in the Northern Hemisphere is during the summer in the Southern Hemisphere
|
C
| 2
|
openbookqa
|
the hemisphere that is tilted away from the sun is cooler because it receives less direct rays. as earth orbits the sun, the northern hemisphere goes from winter to spring, then summer and fall. the southern hemisphere does the opposite from summer to fall to winter to spring. when it is winter in the northern hemisphere, it is summer in the southern hemisphere, and vice versa.
|
the subarctic climate ( also called subpolar climate, or boreal climate ) is a continental climate with long, cold ( often very cold ) winters, and short, warm to cool summers. it is found on large landmasses, often away from the moderating effects of an ocean, generally at latitudes from 50°n to 70°n, poleward of the humid continental climates. like other class d climates, they are rare in the southern hemisphere, only found at some isolated highland elevations. subarctic or boreal climates are the source regions for the cold air that affects temperate latitudes to the south in winter. these climates represent köppen climate classification dfc, dwc, dsc, dfd, dwd and dsd. description this type of climate offers some of the most extreme seasonal temperature variations found on the planet : in winter, temperatures can drop to below −50 c ( −58 f ) and in summer, the temperature may exceed 26 c ( 79 f ). however, the summers are short ; no more than three months of the year ( but at least one month ) must have a 24 - hour average temperature of at least 10 c ( 50 f ) to fall into this category of climate, and the coldest month should average below 0 c ( 32 f ) ( or −3 c ( 27 f ) ). record low temperatures can approach −70 c ( −94 f ). with 5 to 7 consecutive months when the average temperature is below freezing, all moisture
|
the earth is tilted on its axis ( figure above ). this means that as the earth rotates, one hemisphere has longer days with shorter nights. at the same time the other hemisphere has shorter days and longer nights. for example, in the northern hemisphere summer begins on june 21. on this date, the north pole is pointed directly toward the sun. this is the longest day and shortest night of the year in the northern hemisphere. the south pole is pointed away from the sun. this means that the southern hemisphere experiences its longest night and shortest day ( figure below ).
|
A coal mine is what?
|
[
"a person who mines for coal",
"a rare type of stone",
"a place where coal is processed",
"a mine that is beneath the earth where coal is found"
] |
Key fact:
coal mine is a source of coal under the ground
|
D
| 3
|
openbookqa
|
mining is the extraction of valuable geological materials and minerals from the surface of the earth. mining is required to obtain most materials that cannot be grown through agricultural processes, or feasibly created artificially in a laboratory or factory. ores recovered by mining include metals, coal, oil shale, gemstones, limestone, chalk, dimension stone, rock salt, potash, gravel, and clay. the ore must be a rock or mineral that contains valuable constituent, can be extracted or mined and sold for profit. mining in a wider sense includes extraction of any non - renewable resource such as petroleum, natural gas, or even water. modern mining processes involve prospecting for ore bodies, analysis of the profit potential of a proposed mine, extraction of the desired materials, and final reclamation or restoration of the land after the mine is closed. mining materials are often obtained from ore bodies, lodes, veins, seams, reefs, or placer deposits. the exploitation of these deposits for raw materials is dependent on investment, labor, energy, refining, and transportation cost. mining operations can create a negative environmental impact, both during the mining activity and after the mine has closed. hence, most of the world's nations have passed regulations to decrease the impact ; however, the outsized role of mining in generating business for often rural, remote or economically depressed communities means that governments often fail to fully enforce such regulations. work safety has long been a concern as well, and where enforced, modern practices have significantly
|
a field is a mineral deposit containing a metal or other valuable resources in a cost - competitive concentration. it is usually used in the context of a mineral deposit from which it is convenient to extract its metallic component. the deposits are exploited by mining in the case of solid mineral deposits ( such as iron or coal ) and extraction wells in case of fluids ( such as oil, gas or brines ). description in geology and related fields a deposit is a layer of rock or soil with uniform internal features that distinguish it from adjacent layers. each layer is generally one of a series of parallel layers which lie one above the other, laid one on the other by natural forces. they may extend for hundreds of thousands of square kilometers of the earth's surface. the deposits are usually seen as a different color material groups or different structure exposed in cliffs, canyons, caves and river banks. individual agglomerates may vary in thickness from a few millimeters up to a kilometer or more. each cluster represents a specific type of deposit : flint river, sea sand, coal swamp, sand dunes, lava beds, etc. it can consist of layers of sediment, usually by marine or differentiations of certain minerals during cooling of magma or during metamorphosis of the previous rock. the mineral deposits are generally oxides, silicates and sulfates or metal not commonly concentrated in the earth's crust. the deposits must be machined to extract the metals in question from the waste rock and minerals from
|
coal is a solid hydrocarbon formed from decaying plant material over millions of years.
|
Preparing food at the proper temperatures
|
[
"is too much work and should be avoided",
"eradicates potential illness causing organisms",
"allows bacteria to flourish",
"leaves meat raw and under cooked"
] |
Key fact:
cooking food to proper temperatures protects against food poisoning by killing bacteria and viruses
|
B
| 1
|
openbookqa
|
bacterial contamination of foods can lead to digestive problems, an illness known as food poisoning. raw eggs and undercooked meats commonly carry the bacteria that can cause food poisoning. food poisoning can be prevented by cooking meat thoroughly, which kills most microbes, and washing surfaces that have been in contact with raw meat. washing your hands before and after handling food also helps prevent contamination.
|
bacterial contamination of foods can lead to digestive problems, an illness known as food poisoning. raw eggs and undercooked meats commonly carry the bacteria that can cause food poisoning. food poisoning can be prevented by cooking meat thoroughly and washing surfaces that have been in contact with raw meat. washing your hands before and after handling food also helps prevent contamination.
|
bacteria are responsible for many types of diseases in humans.
|
What requires nutrients for survival?
|
[
"sand",
"plastic",
"metal",
"an anaconda"
] |
Key fact:
an animal requires nutrients for survival
|
D
| 3
|
openbookqa
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
|
a crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. they are characterized by symmetry, morphology, and directionally dependent physical properties. a crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. ( molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in x - ray, neutron, and electron diffraction based crystallography ). crystal structures of crystalline material are typically determined from x - ray or neutron single - crystal diffraction data and stored in crystal structure databases. they are routinely identified by comparing reflection intensities and lattice spacings from x - ray powder diffraction data with entries in powder - diffraction fingerprinting databases. crystal structures of nanometer sized crystalline samples can be determined via structure factor amplitude information from single - crystal electron diffraction data or structure factor amplitude and phase angle information from fourier transforms of hrtem images of crystallites. they are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice - fringe fingerprint plots with entries in a lattice - fringe fingerprinting database. crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. many provide structure visualization capabilities. they can be browser based or
|
the chemical database service is an epsrc - funded mid - range facility that provides uk academic institutions with access to a number of chemical databases. it has been hosted by the royal society of chemistry since 2013, before which it was hosted by daresbury laboratory ( part of the science and technology facilities council ). currently, the included databases are : acd / i - lab, a tool for prediction of physicochemical properties and nmr spectra from a chemical structure available chemicals directory, a structure - searchable database of commercially available chemicals cambridge structural database ( csd ), a crystallographic database of organic and organometallic structures inorganic crystal structure database ( icsd ), a crystallographic database of inorganic structures crystalworks, a database combining data from csd, icsd and crystmet detherm, a database of thermophysical data for chemical compounds and mixtures spresiweb, a database of organic compounds and reactions = = references = =
|
Sound can be used for communication by
|
[
"creatures",
"plants",
"water",
"planets"
] |
Key fact:
sound can be used for communication by animals
|
A
| 0
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
|
genedb was a genome database for eukaryotic and prokaryotic pathogens. references external links http : / / www. genedb. org
|
Other than sight bloodhounds can find a meal by
|
[
"social media",
"their phone",
"the internet",
"stench"
] |
Key fact:
smell is used for finding food by some animals
|
D
| 3
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
a database dump contains a record of the table structure and / or the data from a database and is usually in the form of a list of sql statements ( " sql dump " ). a database dump is most often used for backing up a database so that its contents can be restored in the event of data loss. corrupted databases can often be recovered by analysis of the dump. database dumps are often published by free content projects, to facilitate reuse, forking, offline use, and long - term digital preservation. dumps can be transported into environments with internet blackouts or otherwise restricted internet access, as well as facilitate local searching of the database using sophisticated tools such as grep. see also import and export of data core dump databases database management system sqlyog - mysql gui tool to generate database dump data portability external links mysqldump a database backup program postgresql dump backup methods, for postgresql databases.
|
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
|
In a warm room, it is likely that the source of heat is
|
[
"a series of metal pipes along a wall",
"a small ceiling fan",
"a stove which is turned off",
"a pile of boxes"
] |
Key fact:
a radiator is a source of heat
|
A
| 0
|
openbookqa
|
in engineering and computing, " stovepipe system " is a pejorative term for a system that has the potential to share data or functionality with other systems but which does not do so. the term evokes the image of stovepipes rising above buildings, each functioning individually. a simple example of a stovepipe system is one that implements its own user ids and passwords, instead of relying on a common user id and password shared with other systems. stovepipes are systems procured and developed to solve a specific problem, characterized by a limited focus and functionality, and containing data that cannot be easily shared with other systems. a stovepipe system is generally considered an example of an anti - pattern, particularly found in legacy systems. this is due to the lack of code reuse, and resulting software brittleness due to potentially general functions only being used on limited input. however, in certain cases stovepipe systems are considered appropriate, due to benefits from vertical integration and avoiding dependency hell. for example, the microsoft excel team has avoided dependencies and even maintained its own c compiler, which helped it to ship on time, have high - quality code, and generate small, cross - platform code. see also not invented here reinventing the wheel stovepipe ( organisation ) = = references = =
|
a pipe is a tubular section or hollow cylinder, usually but not necessarily of circular cross - section, used mainly to convey substances which can flow : liquids and gases ( fluids ), slurries, powders and masses of small solids. it can also be used for structural applications ; a hollow pipe is far stiffer per unit weight than the solid members. in common usage the words pipe and tube are usually interchangeable, but in industry and engineering, the terms are uniquely defined. depending on the applicable standard to which it is manufactured, pipe is generally specified by a nominal diameter with a constant outside diameter ( od ) and a schedule that defines the thickness. tube is most often specified by the od and wall thickness, but may be specified by any two of od, inside diameter ( id ), and wall thickness. pipe is generally manufactured to one of several international and national industrial standards. while similar standards exist for specific industry application tubing, tube is often made to custom sizes and a broader range of diameters and tolerances. many industrial and government standards exist for the production of pipe and tubing. the term " tube " is also commonly applied to non - cylindrical sections, i. e., square or rectangular tubing. in general, " pipe " is the more common term in most of the world, whereas " tube " is more widely used in the united states. both " pipe " and " tube " imply a level of rigidity and permanence, whereas
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
A thing which is measured, such as a bucket of salt, needs to first be
|
[
"evaded",
"burned",
"gathered",
"lost"
] |
Key fact:
An example of collecting data is measuring
|
C
| 2
|
openbookqa
|
a database dump contains a record of the table structure and / or the data from a database and is usually in the form of a list of sql statements ( " sql dump " ). a database dump is most often used for backing up a database so that its contents can be restored in the event of data loss. corrupted databases can often be recovered by analysis of the dump. database dumps are often published by free content projects, to facilitate reuse, forking, offline use, and long - term digital preservation. dumps can be transported into environments with internet blackouts or otherwise restricted internet access, as well as facilitate local searching of the database using sophisticated tools such as grep. see also import and export of data core dump databases database management system sqlyog - mysql gui tool to generate database dump data portability external links mysqldump a database backup program postgresql dump backup methods, for postgresql databases.
|
a vulnerability database ( vdb ) is a platform aimed at collecting, maintaining, and disseminating information about discovered computer security vulnerabilities. the database will customarily describe the identified vulnerability, assess the potential impact on affected systems, and any workarounds or updates to mitigate the issue. a vdb will assign a unique identifier to each vulnerability cataloged such as a number ( e. g. 123456 ) or alphanumeric designation ( e. g. vdb - 2020 - 12345 ). information in the database can be made available via web pages, exports, or api. a vdb can provide the information for free, for pay, or a combination thereof. history the first vulnerability database was the " repaired security bugs in multics ", published by february 7, 1973 by jerome h. saltzer. he described the list as " a list of all known ways in which a user may break down or circumvent the protection mechanisms of multics ". the list was initially kept somewhat private with the intent of keeping vulnerability details until solutions could be made available. the published list contained two local privilege escalation vulnerabilities and three local denial of service attacks. types of vulnerability databases major vulnerability databases such as the iss x - force database, symantec / securityfocus bid database, and the open source vulnerability database ( osvdb ) aggregate a broad range of publicly disclosed vulnerabilities, including common vu
|
exploitdb, sometimes stylized as exploit database or exploit - database, is a public and open source vulnerability database maintained by offensive security. it is one of the largest and most popular exploit databases in existence. while the database is publicly available via their website, the database can also be used by utilizing the searchsploit command - line tool which is native to kali linux. the database also contains proof - of - concepts ( pocs ), helping information security professionals learn new exploit variations. in ethical hacking and penetration testing guide, rafay baloch said exploit - db had over 20, 000 exploits, and was available in backtrack linux by default. in ceh v10 certified ethical hacker study guide, ric messier called exploit - db a " great resource ", and stated it was available within kali linux by default, or could be added to other linux distributions. the current maintainers of the database, offensive security, are not responsible for creating the database. the database was started in 2004 by a hacker group known as milw0rm and has changed hands several times. as of 2023, the database contained 45, 000 entries from more than 9, 000 unique authors. see also offensive security offensive security certified professional references external links official website
|
Cycles of day and night occur how often in a day?
|
[
"5 times",
"singular instances",
"forty times",
"once a year"
] |
Key fact:
cycles of day and night occur once per day
|
B
| 1
|
openbookqa
|
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
|
the context of count data.
|
In order to assemble a bike, the following are needed with exception of?
|
[
"Nails",
"Bolts",
"Screws",
"Bars"
] |
Key fact:
a bicycle contains screws
|
A
| 0
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
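The chunk above describes relationships as first-class citizens that are labelled, directed, carry properties, and can be traversed in one operation. A minimal sketch of that idea (class and field names are illustrative, not any particular product's API):

```python
class PropertyGraph:
    """Toy property graph: nodes and labelled, directed edges with properties."""

    def __init__(self):
        self.nodes = {}   # node_id -> properties dict
        self.edges = {}   # node_id -> list of (label, target, properties)

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props
        self.edges.setdefault(node_id, [])

    def add_edge(self, src, label, dst, **props):
        # Edges are stored directly on the source node: labelled, directed,
        # and carrying their own properties, as the text describes.
        self.edges[src].append((label, dst, props))

    def neighbours(self, node_id, label=None):
        # One-hop traversal is a direct lookup, not a join over foreign keys.
        return [dst for (lbl, dst, _) in self.edges.get(node_id, [])
                if label is None or lbl == label]

g = PropertyGraph()
g.add_node("alice", kind="person")
g.add_node("bob", kind="person")
g.add_edge("alice", "KNOWS", "bob", since=2020)
print(g.neighbours("alice", "KNOWS"))  # -> ['bob']
```

Because each edge is stored adjacent to its source node, following a chain of relationships costs one lookup per hop, which is the traversal advantage the text attributes to graph databases over relational joins.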
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
|
a vulnerability database ( vdb ) is a platform aimed at collecting, maintaining, and disseminating information about discovered computer security vulnerabilities. the database will customarily describe the identified vulnerability, assess the potential impact on affected systems, and any workarounds or updates to mitigate the issue. a vdb will assign a unique identifier to each vulnerability cataloged such as a number ( e. g. 123456 ) or alphanumeric designation ( e. g. vdb - 2020 - 12345 ). information in the database can be made available via web pages, exports, or api. a vdb can provide the information for free, for pay, or a combination thereof. history the first vulnerability database was the " repaired security bugs in multics ", published by february 7, 1973 by jerome h. saltzer. he described the list as " a list of all known ways in which a user may break down or circumvent the protection mechanisms of multics ". the list was initially kept somewhat private with the intent of keeping vulnerability details until solutions could be made available. the published list contained two local privilege escalation vulnerabilities and three local denial of service attacks. types of vulnerability databases major vulnerability databases such as the iss x - force database, symantec / securityfocus bid database, and the open source vulnerability database ( osvdb ) aggregate a broad range of publicly disclosed vulnerabilities, including common vu
|
When would you want a radiator the most?
|
[
"winter",
"spring",
"fall",
"summer"
] |
Key fact:
a radiator is a source of heat
|
A
| 0
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ) system, rather than an oltp ( online transaction processing ) system. modern decision support databases and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
|
What kind of animal returns to the same beaches each year to give birth?
|
[
"saltwater crocodile",
"carnivorous bird",
"semiaquatic mammal",
"tiger shark"
] |
Key fact:
seals every year return to the same beaches to give birth
|
C
| 2
|
openbookqa
|
sharkbook is a global database for identifying and tracking sharks, particularly whale sharks, using uploaded photos and videos. in addition to identifying and tracking sharks, the site allows people to " adopt a shark " and get updates on specific animals. creation sharkbook is the result of collaboration between simon j pierce of the marine megafauna foundation and jason holmberg of wild me. the software is open source and is now being used by other biology projects. identification of individual sharks whale sharks have unique spot patterning on their sides, similar to a human fingerprint, which allows for individual identification. scuba divers around the world can photograph sharks and upload their identification photographs to the sharkbook website, supporting global research and conservation efforts. additionally, the software automatically searches social media sites like youtube and instagram to look for images of whale sharks and adds them to the database. sharkbook software uses special pattern - matching software to identify the unique spots on each shark. this software and algorithms were originally adapted from nasa star tracking software used on the hubble space telescope. this software uses a scale - invariant feature transform ( sift ) algorithm, which can cope with complications presented by highly variable spot patterns and low contrast photographs. purpose this citizen science tool is free to use by researchers worldwide. sharkbook represents a global initiative to centralize shark sightings and facilitate research on these vulnerable species. see also manta matcher - for manta rays flukebook - for whales and dolphins = = references = =
|
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
|
the interim register of marine and nonmarine genera ( irmng ) is a taxonomic database which attempts to cover published genus names for all domains of life ( also including subgenera in zoology ), from 1758 in zoology ( 1753 in botany ) up to the present, arranged in a single, internally consistent taxonomic hierarchy, for the benefit of biodiversity informatics initiatives plus general users of biodiversity ( taxonomic ) information. in addition to containing over 500, 000 published genus name instances as at july 2024 ( also including subgeneric names in zoology ), the database holds over 1. 7 million species names ( 1. 3 million listed as " accepted " ), although this component of the data is not maintained in as current or complete state as the genus - level holdings. irmng can be queried online for access to the latest version of the dataset and is also made available as periodic snapshots or data dumps for import / upload into other systems as desired. the database was commenced in 2006 at the then csiro division of marine and atmospheric research in australia and, since 2016, has been hosted at the flanders marine institute ( vliz ) in belgium. description irmng contains scientific names ( only ) of the genera ( plus zoological subgenera, see below ), a subset of species, and principal higher ranks of most plants, animals and other kingdoms, both living and extinct, within a standardized taxonomic hierarchy, with associated machine - readable information on habitat (
|
What does water taste like after a substance is dissolved in it?
|
[
"watery",
"the same",
"similar to object",
"full of life"
] |
Key fact:
dissolving a substance in water causes the water to taste like that substance
|
C
| 2
|
openbookqa
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
a database catalog of a database instance consists of metadata in which definitions of database objects such as base tables, views ( virtual tables ), synonyms, value ranges, indexes, users, and user groups are stored. it is an architecture product that documents the database's content and data quality. standards the sql standard specifies a uniform means to access the catalog, called the information _ schema, but not all databases follow this, even if they implement other aspects of the sql standard. for an example of database - specific metadata access methods, see oracle metadata. see also data dictionary data lineage data catalog vocabulary, a w3c standard for metadata metadata registry, central location where metadata definitions are stored and maintained metadata repository, a database created to store metadata = = references = =
|
Producers in the food chain
|
[
"are self sufficient",
"rely on predators",
"struggle to survive",
"decompose organisms"
] |
Key fact:
a producer produces its own food
|
A
| 0
|
openbookqa
|
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
|
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb includes also a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without missing the focus on the searched protein or gene. in phylomedb v4 all the information available for each tree
|
organisms are not independent, they are interdependent. they cannot live alone ; they need other organisms to survive. the same is true for species. all species need other species to survive.
|
If a river is flowing down the east side of a hill, then the hill
|
[
"drops at a slow rate",
"drops sharply right there",
"drops sharply on the west",
"is evenly sloped everywhere"
] |
Key fact:
the slope of the land causes a river to flow in a particular direction
|
B
| 1
|
openbookqa
|
less dramatic types of mass wasting move earth materials slowly down a hillside. slump is the sudden movement of large blocks of rock and soil down a slope. ( figure below ). all the material moves together in big chunks. slumps may happen when a layer of slippery, wet clay is underneath the rock and soil on a hillside. or they may occur when a river ( or road ) undercuts a slope. slump leaves behind crescent - shaped scars on the hillside.
|
in statistics, signal processing, and econometrics, an unevenly ( or unequally or irregularly ) spaced time series is a sequence of observation time and value pairs ( tn, xn ) in which the spacing of observation times is not constant. unevenly spaced time series naturally occur in many industrial and scientific domains : natural disasters such as earthquakes, floods, or volcanic eruptions typically occur at irregular time intervals. in observational astronomy, measurements such as spectra of celestial objects are taken at times determined by weather conditions, availability of observation time slots, and suitable planetary configurations. in clinical trials ( or more generally, longitudinal studies ), a patient's state of health may be observed only at irregular time intervals, and different patients are usually observed at different points in time. wireless sensors in the internet of things often transmit information only when a state changes to conserve battery life. there are many more examples in climatology, ecology, high - frequency finance, geology, and signal processing. analysis a common approach to analyzing unevenly spaced time series is to transform the data into equally spaced observations using some form of interpolation - most often linear - and then to apply existing methods for equally spaced data. however, transforming data in such a way can introduce a number of significant and hard to quantify biases, especially if the spacing of observations is highly irregular. ideally, unevenly spaced time series are analyzed in their unaltered form. however, most of the basic theory
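The chunk above notes that the common approach to unevenly spaced series is linear interpolation onto a regular grid (while warning this can introduce bias). A minimal sketch with invented observation times and values:

```python
def linear_interp(t, times, values):
    """Linearly interpolate the series (times, values) at time t."""
    segments = zip(zip(times, values), zip(times[1:], values[1:]))
    for (t0, v0), (t1, v1) in segments:
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    raise ValueError("t outside observed range")

# Irregularly spaced observations (illustrative data).
times = [0.0, 1.5, 4.0]
values = [10.0, 13.0, 18.0]

# Resample onto an equally spaced grid, as the text describes.
grid = [0.0, 1.0, 2.0, 3.0, 4.0]
resampled = [linear_interp(t, times, values) for t in grid]
print(resampled)  # [10.0, 12.0, 14.0, 16.0, 18.0]
```

The bias the text warns about is visible in spirit here: the resampled points between 1.5 and 4.0 are pure straight-line guesses, so any curvature or variance in the true signal over that long gap is silently flattened.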
|
in economics, a recession is a business cycle contraction that occurs when there is a period of broad decline in economic activity. recessions generally occur when there is a widespread drop in spending ( an adverse demand shock ). this may be triggered by various events, such as a financial crisis, an external trade shock, an adverse supply shock, the bursting of an economic bubble, or a large - scale anthropogenic or natural disaster ( e. g. a pandemic ). there is no official definition of a recession, according to the imf. in the united states, a recession is defined as " a significant decline in economic activity spread across the market, lasting more than a few months, normally visible in real gdp, real income, employment, industrial production, and wholesale - retail sales. " the european union has adopted a similar definition. in the united kingdom and canada, a recession is defined as negative economic growth for two consecutive quarters. governments usually respond to recessions by adopting expansionary macroeconomic policies, such as increasing money supply and decreasing interest rates or increasing government spending and decreasing taxation. definitions in a 1974 article by the new york times, commissioner of the bureau of labor statistics julius shiskin suggested that a rough translation of the bureau's qualitative definition of a recession into a quantitative one that almost anyone can use might run like this : in terms of duration declines in real gross national income ( gni ) for two consecutive quarters ; a decline in industrial production over a
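The UK/Canada definition quoted above ("negative economic growth for two consecutive quarters") is mechanical enough to express directly. A sketch with invented quarterly growth figures:

```python
def in_recession(growth):
    """True if any two consecutive quarters both show negative growth."""
    return any(a < 0 and b < 0 for a, b in zip(growth, growth[1:]))

# Hypothetical quarterly GDP growth rates, in percent.
growth = [0.4, -0.1, -0.3, 0.2, 0.5]

print(in_recession(growth))  # True: quarters 2 and 3 both shrank
```

This is only the two-quarter rule of thumb; as the text notes, the US (NBER-style) definition weighs several indicators over a longer window and is not reducible to a single formula like this.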
|
When the sun rises, there is light and it is daytime. When is it night time?
|
[
"when its blue and Pink",
"when its down and lightless",
"when its seven and eight.",
"when its dead and gone"
] |
Key fact:
the sun rising and setting causes cycles of day and night
|
B
| 1
|
openbookqa
|
a temporal database stores data relating to time instances. it offers temporal data types and stores information relating to past, present and future time. temporal databases can be uni - temporal, bi - temporal or tri - temporal. more specifically the temporal aspects usually include valid time, transaction time and / or decision time. valid time is the time period during or event time at which a fact is true in the real world. transaction time is the time at which a fact was recorded in the database. decision time is the time at which the decision was made about the fact. used to keep a history of decisions about valid times. types uni - temporal a uni - temporal database has one axis of time, either the validity range or the system time range. bi - temporal a bi - temporal database has two axes of time : valid time transaction time or decision time tri - temporal a tri - temporal database has three axes of time : valid time transaction time decision time this approach introduces additional complexities. temporal databases are in contrast to current databases ( not to be confused with currently available databases ), which store only facts which are believed to be true at the current time. features temporal databases support managing and accessing temporal data by providing one or more of the following features : a time period datatype, including the ability to represent time periods with no end ( infinity or forever ) the ability to define valid and transaction time period attributes and bitemporal relations system - maintained transaction time temporal primary keys, including
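The bi-temporal case described above can be sketched as a table where each fact carries both a valid-time range (when it was true in the real world) and a transaction-time range (when the database believed it). All names and dates are illustrative:

```python
import datetime as dt

FOREVER = dt.date.max  # stands in for "no end" / infinity, per the text

rows = [
    # (fact, valid_from, valid_to, tx_from, tx_to)
    ("alice lives in Paris", dt.date(2020, 1, 1), dt.date(2022, 1, 1),
     dt.date(2020, 1, 5), FOREVER),
    ("alice lives in Lyon",  dt.date(2022, 1, 1), FOREVER,
     dt.date(2022, 1, 3), FOREVER),
]

def as_of(rows, valid_at, known_at):
    """Facts true at `valid_at`, according to what was recorded by `known_at`."""
    return [fact for (fact, vf, vt, tf, tt) in rows
            if vf <= valid_at < vt and tf <= known_at < tt]

print(as_of(rows, dt.date(2021, 6, 1), dt.date(2023, 1, 1)))
# -> ['alice lives in Paris']
```

Querying the two axes independently is what distinguishes a bi-temporal store from a current database: the same call can ask "where did alice live in 2021, as we knew it then?" versus "as we know it now?". A tri-temporal store would add a third filter on decision time.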
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
q is a programming language for array processing, developed by arthur whitney. it is proprietary software, commercialized by kx systems. q serves as the query language for kdb +, a disk based and in - memory, column - based database. kdb + is based on the language k, a terse variant of the language apl. q is a thin wrapper around k, providing a more readable, english - like interface. one of the use cases is financial time series analysis, since q supports inexact time matches. an example is matching a trade to the bid and the ask quoted just before it ; the timestamps differ slightly but are matched anyway. overview the fundamental building blocks of q are atoms, lists, and functions. atoms are scalars and include the data types numeric, character, date, and time. lists are ordered collections of atoms ( or other lists ) upon which the higher level data structures dictionaries and tables are internally constructed. a dictionary is a map of a list of keys to a list of values. a table is a transposed dictionary of symbol keys and equal length lists ( columns ) as values. a keyed table, analogous to a table with a primary key placed on it, is a dictionary where the keys and values are arranged as two tables. the following code demonstrates the relationships of the data structures. expressions to evaluate appear prefixed with the q ) prompt, with the output of the evaluation shown beneath : these entities are manipulated
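The "inexact time match" mentioned above is, in kdb+ terms, an as-of join: each trade is matched to the most recent quote at or before its timestamp even though the timestamps differ. A sketch of the idea in Python (the data is invented; q would express this far more tersely):

```python
import bisect

# Quotes and trades keyed by timestamp (illustrative integers).
quotes = [(1, "bid 99"), (3, "bid 100"), (7, "bid 101")]
trades = [2, 3, 8]

quote_times = [t for t, _ in quotes]

def asof(trade_time):
    """Most recent quote at or before trade_time, or None if none exists."""
    i = bisect.bisect_right(quote_times, trade_time) - 1
    return quotes[i][1] if i >= 0 else None

print([asof(t) for t in trades])  # ['bid 99', 'bid 100', 'bid 101']
```

The binary search makes each match O(log n) against a sorted quote series, which is the same prefix-matching shape that makes this kind of join cheap in a column-ordered time-series database.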
|
Fungi
|
[
"can do their food chain jobs without ingestion",
"act as predators in the food chain",
"are always safe to ingest",
"occupy the top of the food chain"
] |
Key fact:
In the food chain process fungi have the role of decomposer
|
A
| 0
|
openbookqa
|
food chains and food webs the term “ food chain ” is sometimes used metaphorically to describe human social situations. in this sense, food chains are thought of as a competition for survival, such as “ who eats whom? ” someone eats and someone is eaten. therefore, it is not surprising that in our competitive “ dog - eat - dog ” society, individuals who are considered successful are seen as being at the top of the food chain, consuming all others for their benefit, whereas the less successful are seen as being at the bottom. the scientific understanding of a food chain is more precise than in its everyday usage. in ecology, a food chain is a linear sequence of organisms through which nutrients and energy pass : primary producers, primary consumers, and higher - level consumers are used to describe ecosystem structure and dynamics. there is a single path through the chain. each organism in a food chain occupies what is called a trophic level. depending on their role as producers or consumers, species or groups of species can be assigned to various trophic levels. in many ecosystems, the bottom of the food chain consists of photosynthetic organisms ( plants and / or phytoplankton ), which are called primary producers. the organisms that consume the primary producers are herbivores : the primary consumers. secondary consumers are usually carnivores that eat the primary consumers. tertiary consumers are carnivores that eat other carnivores. higher - level consumers feed on the
|
food chains and food webs a food chain is a linear sequence of organisms through which nutrients and energy pass as one organism eats another ; the levels in the food chain are producers, primary consumers, higher - level consumers, and finally decomposers. these levels are used to describe ecosystem structure and dynamics. there is a single path through a food chain. each organism in a food chain occupies a specific trophic level ( energy level ), its position in the food chain or food web. in many ecosystems, the base, or foundation, of the food chain consists of photosynthetic organisms ( plants or phytoplankton ), which are called producers. the organisms that consume the producers are herbivores : the primary consumers. secondary consumers are usually carnivores that eat the primary consumers. tertiary consumers are carnivores that eat other carnivores. higher - level consumers feed on the next lower trophic levels, and so on, up to the organisms at the top of the food chain : the apex consumers. in the lake ontario food chain, shown in figure 20. 4, the chinook salmon is the apex consumer at the top of this food chain.
|
eating ( also known as consuming ) is the ingestion of food. in biology, this is typically done to provide a heterotrophic organism with energy and nutrients and to allow for growth. animals and other heterotrophs must eat in order to survive : carnivores eat other animals, herbivores eat plants, omnivores consume a mixture of both plant and animal matter, and detritivores eat detritus. fungi digest organic matter outside their bodies as opposed to animals that digest their food inside their bodies. for humans, eating is more complex, but is typically an activity of daily living. physicians and dieticians consider a healthful diet essential for maintaining peak physical condition. some individuals may limit their amount of nutritional intake. this may be a result of a lifestyle choice : as part of a diet or as religious fasting. limited consumption may be due to hunger or famine. overconsumption of calories may lead to obesity ; the reasons behind it are myriad, but its prevalence has led some to declare an " obesity epidemic ". eating practices among humans many homes have a large kitchen area devoted to preparation of meals and food, and may have a dining room, dining hall, or another designated area for eating. most societies also have restaurants, food courts, and food vendors so that people may eat when away from home, when lacking time to prepare food, or as a social occasion. at their highest level of sophistication,
|
Succulents will die during winter months in Canada without the aid of a
|
[
"glass structure",
"firehouse",
"smokehouse",
"bonfire"
] |
Key fact:
a greenhouse is used to protect plants by keeping them warm
|
A
| 0
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
a crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. they are characterized by symmetry, morphology, and directionally dependent physical properties. a crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. ( molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in x - ray, neutron, and electron diffraction based crystallography ). crystal structures of crystalline material are typically determined from x - ray or neutron single - crystal diffraction data and stored in crystal structure databases. they are routinely identified by comparing reflection intensities and lattice spacings from x - ray powder diffraction data with entries in powder - diffraction fingerprinting databases. crystal structures of nanometer sized crystalline samples can be determined via structure factor amplitude information from single - crystal electron diffraction data or structure factor amplitude and phase angle information from fourier transforms of hrtem images of crystallites. they are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice - fringe fingerprint plots with entries in a lattice - fringe fingerprinting database. crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. many provide structure visualization capabilities. they can be browser based or
|
blazegraph is an open source triplestore and graph database, written in java. it has been abandoned since 2020 and is known to be used in production by wmde for the wikidata sparql endpoint. it is licensed under the gnu gpl ( version 2 ). amazon acquired the blazegraph developers and the blazegraph open source development was essentially stopped in april 2018. early history the system was first known as bigdata. since the release of version 1. 5 ( 12 february 2015 ), it has been named blazegraph. prominent users the wikimedia foundation uses blazegraph for the wikidata query service, which is a sparql endpoint. sophox, a fork of the wikidata query service, specializes in openstreetmap queries. the datatourisme project uses blazegraph as the database platform ; however, graphql is used as the query language instead of sparql. notable features rdf *, an alternative approach to rdf reification that gives rdf graphs the capabilities of lpg graphs ; as a consequence, the ability to query graphs in both sparql and gremlin ; as an alternative to gremlin querying, a gas abstraction over rdf graphs supported in sparql ; the service syntax of federated queries for extending functionality ; managed behavior of the query plan generator ; reusable named subqueries. acqui -
|
Plants are like all other organisms, in that they need what to survive?
|
[
"sustenance",
"shoes",
"games",
"internet"
] |
Key fact:
a plant requires food for survival
|
A
| 0
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
gun ( also known as graph universe node, gun. js, and gundb ) is an open source, offline - first, real - time, decentralized, graph database written in javascript for the web browser. the database is implemented as a peer - to - peer network distributed across " browser peers " and " runtime peers ". it employs multi - master replication with a custom commutative replicated data type ( crdt ). gun is currently used in the decentralized version of the internet archive.
|
database theory encapsulates a broad range of topics related to the study and research of the theoretical realm of databases and database management systems. theoretical aspects of data management include, among other areas, the foundations of query languages, computational complexity and expressive power of queries, finite model theory, database design theory, dependency theory, foundations of concurrency control and database recovery, deductive databases, temporal and spatial databases, real - time databases, managing uncertain data and probabilistic databases, and web data. most research work has traditionally been based on the relational model, since this model is usually considered the simplest and most foundational model of interest. corresponding results for other data models, such as object - oriented or semi - structured models, or, more recently, graph data models and xml, are often derivable from those for the relational model. database theory helps one to understand the complexity and power of query languages and their connection to logic. starting from relational algebra and first - order logic ( which are equivalent by codd's theorem ) and the insight that important queries such as graph reachability are not expressible in this language, more powerful language based on logic programming and fixpoint logic such as datalog were studied. the theory also explores foundations of query optimization and data integration. here most work studied conjunctive queries, which admit query optimization even under constraints using the chase algorithm. the main research conferences in the area are the acm symposium on principles of database systems ( pods
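The chunk notes that graph reachability is not expressible in relational algebra / first-order logic, which motivated Datalog and fixpoint logics. A naive bottom-up evaluation of the standard Datalog reachability program — `reach(X, Y) :- edge(X, Y).` and `reach(X, Y) :- reach(X, Z), edge(Z, Y).` — can be sketched in Python (the function name and edge encoding are illustrative):

```python
def reachability(edges):
    """Naive fixpoint evaluation of Datalog transitive closure.
    `edges` is a set of (source, target) pairs."""
    reach = set(edges)  # base rule: every edge is a reachability fact
    while True:
        # recursive rule: extend known paths by one edge
        new = {(x, w) for (x, y) in reach for (z, w) in edges if y == z}
        if new <= reach:
            return reach  # fixpoint reached: no new facts derivable
        reach |= new

edges = {("a", "b"), ("b", "c"), ("c", "d")}
```

Each loop iteration is one application of the immediate-consequence operator; termination is guaranteed because the set of possible facts over a finite graph is finite.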
|
Which type of energy is the most environmentally friendly?
|
[
"Coal",
"Petroleum",
"Natural Gas",
"Sunlight"
] |
Key fact:
solar energy is a renewable resource
|
D
| 3
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
metpetdb is a relational database and repository for global geochemical data on and images collected from metamorphic rocks from the earth's crust. metpetdb is designed and built by a global community of metamorphic petrologists in collaboration with computer scientists at rensselaer polytechnic institute as part of the national cyberinfrastructure initiative and supported by the national science foundation. metpetdb is unique in that it incorporates image data collected by a variety of techniques, e. g. photomicrographs, backscattered electron images ( sem ), and x - ray maps collected by wavelength dispersive spectroscopy or energy dispersive spectroscopy. purpose metpetdb was built for the purpose of archiving published data and for storing new data for ready access to researchers and students in the petrologic community. this database facilitates the gathering of information for researchers beginning new projects and permits browsing and searching for data relating to anywhere on the globe. metpetdb provides a platform for collaborative studies among researchers anywhere on the planet, serves as a portal for students beginning their studies of metamorphic geology, and acts as a repository of vast quantities of data being collected by researchers globally. design the basic structure of metpetdb is based on a geologic sample and derivative subsamples. geochemical data are linked to subsamples and the minerals within them, while image data can relate to samples or subsamples. metpetdb is designed to store the distinct spatial / textural context
|
over the last two centuries many environmental chemical observations have been made from a variety of ground - based, airborne, and orbital platforms and deposited in databases. many of these databases are publicly available. all of the instruments mentioned in this article give online public access to their data. these observations are critical in developing our understanding of the earth's atmosphere and issues such as climate change, ozone depletion and air quality. some of the external links provide repositories of many of these datasets in one place. for example, the cambridge atmospheric chemical database is a large database in a uniform ascii format. each observation is augmented with the meteorological conditions such as the temperature, potential temperature, geopotential height, and equivalent pv latitude. ground - based and balloon observations ndsc observations. the network for the detection of stratospheric change ( ndsc ) is a set of high - quality remote - sounding research stations for observing and understanding the physical and chemical state of the stratosphere. ozone and key ozone - related chemical compounds and parameters are targeted for measurement. the ndsc is a major component of the international upper atmosphere research effort and has been endorsed by national and international scientific agencies, including the international ozone commission, the united nations environment programme ( unep ), and the world meteorological organization ( wmo ). the primary instruments and measurements are : ozone lidar ( vertical profiles of ozone from the tropopause to at least 40 km altitude
|
MRIs can make
|
[
"fillings dance",
"great yogurt",
"programs for assembly",
"a mess"
] |
Key fact:
non-contact forces can affect objects that are not touching
|
A
| 0
|
openbookqa
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
a database dump contains a record of the table structure and / or the data from a database and is usually in the form of a list of sql statements ( " sql dump " ). a database dump is most often used for backing up a database so that its contents can be restored in the event of data loss. corrupted databases can often be recovered by analysis of the dump. database dumps are often published by free content projects, to facilitate reuse, forking, offline use, and long - term digital preservation. dumps can be transported into environments with internet blackouts or otherwise restricted internet access, as well as facilitate local searching of the database using sophisticated tools such as grep. see also import and export of data core dump databases database management system sqlyog - mysql gui tool to generate database dump data portability external links mysqldump a database backup program postgresql dump backup methods, for postgresql databases.
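A concrete SQL dump of the kind the chunk describes — a list of SQL statements recording both table structure and data — can be produced with Python's standard-library `sqlite3` module, whose `Connection.iterdump()` yields the dump statement by statement. The table and data below are illustrative:

```python
import sqlite3

# Build a tiny in-memory database to dump.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('ada')")
conn.commit()

# iterdump() yields the "sql dump": CREATE TABLE statements for the
# structure and INSERT statements for the data, wrapped in a transaction,
# so executing the dump against an empty database restores the contents.
dump = "\n".join(conn.iterdump())
```

Restoring is the mirror operation: `restored.executescript(dump)` on a fresh connection replays the statements, which is exactly the backup/recovery use case the chunk mentions.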
|
A bird of prey is hunting a scaly meal. How might this creature avoid being spotted?
|
[
"Lure the hunter into a trap and eat it first",
"Close its eyes and hope for the best",
"Altering its hues to look like the leaves",
"Flee at breakneck speeds"
] |
Key fact:
hawks eat lizards
|
C
| 2
|
openbookqa
|
predation is a biological interaction in which one organism, the predator, kills and eats another organism, its prey. it is one of a family of common feeding behaviours that includes parasitism and micropredation ( which usually do not kill the host ) and parasitoidism ( which always does, eventually ). it is distinct from scavenging on dead prey, though many predators also scavenge ; it overlaps with herbivory, as seed predators and destructive frugivores are predators. predation behavior varies significantly depending on the organism. many predators, especially carnivores, have evolved distinct hunting strategies. pursuit predation involves the active search for and pursuit of prey, whilst ambush predators instead wait for prey to present an opportunity for capture, and often use stealth or aggressive mimicry. other predators are opportunistic or omnivorous and only practice predation occasionally. most obligate carnivores are specialized for hunting. they may have acute senses such as vision, hearing, or smell for prey detection. many predatory animals have sharp claws or jaws to grip, kill, and cut up their prey. physical strength is usually necessary for large carnivores such as big cats to kill larger prey. other adaptations include stealth, endurance, intelligence, social behaviour, and aggressive mimicry that improve hunting efficiency. predation has a powerful selective effect on prey, and the prey develops anti - predator adaptations such as warning colouration, alarm calls and other
|
a camera trap is a camera that is automatically triggered by motion in its vicinity, like the presence of an animal or a human being. it is typically equipped with a motion sensor, usually a passive infrared ( pir ) sensor or an active infrared ( air ) sensor using an infrared light beam. camera traps are a type of remote camera used to capture images of wildlife with as little human interference as possible. camera trapping is a method for recording wild animals when researchers are not present, and has been used in ecological research for decades. in addition to applications in hunting and wildlife viewing, research applications include studies of nest ecology, detection of rare species, estimation of population size and species richness, and research on habitat use and occupation of human - built structures. since the introduction of commercial infrared - triggered cameras in the early 1990s, their use has increased. with advancements in the quality of camera equipment, this method of field observation has become more popular among researchers. hunting has played an important role in development of camera traps, since hunters use them to scout for game. these hunters have opened a commercial market for the devices, leading to many improvements over time. application the great advantage of camera traps is that they can record very accurate data without disturbing the photographed animal. these data are superior to human observations because they can be reviewed by other researchers. they minimally disturb wildlife and can replace the use of more invasive survey and monitoring techniques such as live trap and release. they operate continually and silently, provide proof
|
observational learning explains how wolves know how to hunt as a group.
|
When a hurricane glides over a continent it
|
[
"runs for president",
"becomes an earthquake",
"increases in strength",
"decreases in strength"
] |
Key fact:
when a hurricane moves over land , that hurricane will decrease in strength
|
D
| 3
|
openbookqa
|
for decades scientists have had equipment that can measure earthquake magnitude. the earthquake magnitude is a measure of the energy released during the quake.
|
compare fractures and faults and define how they are related to earthquakes.
|
elastic rebound theory. stresses build on both sides of a fault. the rocks deform plastically as seen in time 2. when the stresses become too great, the rocks return to their original shape. to do this, the rocks move, as seen in time 3. this movement releases energy, creating an earthquake.
|
Adobe works as an electrical
|
[
"paper weight",
"pie filler",
"chiller",
"anti-conductor"
] |
Key fact:
brick is an electrical insulator
|
D
| 3
|
openbookqa
|
diprodb is a database designed to collect and analyse thermodynamic, structural and other dinucleotide properties.
|
the chemical database service is an epsrc - funded mid - range facility that provides uk academic institutions with access to a number of chemical databases. it has been hosted by the royal society of chemistry since 2013, before which it was hosted by daresbury laboratory ( part of the science and technology facilities council ). currently, the included databases are : acd / i - lab, a tool for prediction of physicochemical properties and nmr spectra from a chemical structure ; available chemicals directory, a structure - searchable database of commercially available chemicals ; cambridge structural database ( csd ), a crystallographic database of organic and organometallic structures ; inorganic crystal structure database ( icsd ), a crystallographic database of inorganic structures ; crystalworks, a database combining data from csd, icsd and crystmet ; detherm, a database of thermophysical data for chemical compounds and mixtures ; spresiweb, a database of organic compounds and reactions.
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
A nightcrawler will most likely reside and consume nearest
|
[
"a rain cloud",
"an un-raked yard",
"a river rapids",
"a mountain avalanche"
] |
Key fact:
animals live and feed near their habitats
|
B
| 1
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
then the bulk of effort concentrates on writing the proper mediator code that will transform predicates on weather into a query over the weather website. this effort can become complex if some other source also relates to weather, because the designer may need to write code to properly combine the results from the two sources. on the other hand, in lav, the source database is modeled as a set of views over g { \ displaystyle g }.
|
then the bulk of effort concentrates on writing the proper mediator code that will transform predicates on weather into a query over the weather website. this effort can become complex if some other source also relates to weather, because the designer may need to write code to properly combine the results from the two sources. on the other hand, in lav, the source database is modeled as a set of views over g { \ displaystyle g }.
|
What would a sedimentary rock likely hold?
|
[
"a trilobyte",
"a cookie",
"a wheatgrass shake",
"a diner"
] |
Key fact:
nearly all fossils are found in sedimentary rock
|
A
| 0
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
|
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by
|
evaporation is the first stage in the what cycle
|
[
"H2O",
"lunar",
"growth",
"menstrual"
] |
Key fact:
evaporation is when water is drawn back up into the air in the water cycle
|
A
| 0
|
openbookqa
|
the menstrual cycle is a monthly cycle of changes in the ovaries and uterus.
|
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
|
a crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. they are characterized by symmetry, morphology, and directionally dependent physical properties. a crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. ( molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in x - ray, neutron, and electron diffraction based crystallography ). crystal structures of crystalline material are typically determined from x - ray or neutron single - crystal diffraction data and stored in crystal structure databases. they are routinely identified by comparing reflection intensities and lattice spacings from x - ray powder diffraction data with entries in powder - diffraction fingerprinting databases. crystal structures of nanometer sized crystalline samples can be determined via structure factor amplitude information from single - crystal electron diffraction data or structure factor amplitude and phase angle information from fourier transforms of hrtem images of crystallites. they are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice - fringe fingerprint plots with entries in a lattice - fringe fingerprinting database. crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. many provide structure visualization capabilities. they can be browser based or
|
Why do companies heat up milk before they bottle it?
|
[
"the milk is probably sour",
"it tastes bad that way",
"small organisms could make you sick",
"the cow could get angry"
] |
Key fact:
pasteurization reduces the amount of bacteria in milk
|
C
| 2
|
openbookqa
|
harmful bacteria can enter your digestive system in food and make you sick. this is called foodborne illness or food poisoning. the bacteria, or the toxins they produce, may cause vomiting or cramping, in addition to the symptoms mentioned above. foodborne illnesses can also be caused by viruses and parasites. the most common foodborne illnesses happen within a few minutes to a few hours, and make you feel really sick, but last for only about a day or so. others can take longer for the illness to appear. some people believe that the taste of food will tell you if it is bad. as a rule, you probably should not eat bad tasting food, but many contaminated foods can still taste good.
|
some bacteria can contaminate food and cause food poisoning.
|
tremetol, a metabolic poison found in the white snake root plant, prevents the metabolism of lactate. when cows eat this plant, it is concentrated in the milk they produce. humans who consume the milk become ill. symptoms of this disease, which include vomiting, abdominal pain, and tremors, become worse after exercise. why do you think this is the case? alcohol fermentation another familiar fermentation process is alcohol fermentation ( figure 7. 15 ) that produces ethanol, an alcohol. the first chemical reaction of alcohol fermentation is the following ( co2 does not participate in the second reaction ) :.
|
If a plane is landing on a strip where someone is standing, the plane's blinking will look
|
[
"more intense",
"duller",
"more distant",
"more dim"
] |
Key fact:
as a source of light becomes closer , that source will appear brighter
|
A
| 0
|
openbookqa
|
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
|
What is undesirable in a vegetable garden?
|
[
"tomatoes",
"green peppers",
"corn",
"dandelions"
] |
Key fact:
if a weed is pulled then that weed is destroyed
|
D
| 3
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
the plant proteome database is a national science foundation - funded project to determine the biological function of each protein in plants. it includes data for two plants that are widely studied in molecular biology, arabidopsis thaliana and maize ( zea mays ). initially the project was limited to plant plastids, under the name of the plastid pdb, but was expanded and renamed plant pdb in november 2007. see also proteome references external links plant proteome database home page
|
dr. duke's phytochemical and ethnobotanical databases is an online database developed by james a. duke at the usda. the databases report species, phytochemicals, and biological activity, as well as ethnobotanical uses. the current phytochemical and ethnobotanical databases facilitate plant, chemical, bioactivity, and ethnobotany searches. a large number of plants and their chemical profiles are covered, and data are structured to support browsing and searching in several user - focused ways. for example, users can get a list of chemicals and activities for a specific plant of interest, using either its scientific or common name download a list of chemicals and their known activities in pdf or spreadsheet form find plants with chemicals known for a specific biological activity display a list of chemicals with their ld toxicity data find plants with potential cancer - preventing activity display a list of plants for a given ethnobotanical use find out which plants have the highest levels of a specific chemical references to the supporting scientific publications are provided for each specific result. also included are links to nutritional databases, plants and cancer treatments and other plant - related databases. the content of the database is licensed under the creative commons cc0 public domain. external links dr. duke's phytochemical and ethnobotanical databases references ( dataset ) u. s. department of agriculture, agricultural research service. 1992 - 2016
|
Which is likely to spread seed?
|
[
"a car",
"a sun beam",
"a whale",
"a hummingbird"
] |
Key fact:
A bird is a pollinating animal
|
D
| 3
|
openbookqa
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by
|
National parks have rules
|
[
"that curtail the growth of fragile animal species",
"that open the parks to mining of natural resources",
"that allow for littering",
"that protect vulnerable animal inhabitants in the parks"
] |
Key fact:
national parks limit hunting
|
D
| 3
|
openbookqa
|
the animal genome size database is a catalogue of published genome size estimates for vertebrate and invertebrate animals. it was created in 2001 by dr. t. ryan gregory of the university of guelph in canada. as of september 2005, the database contains data for over 4, 000 species of animals. a similar database, the plant dna c - values database ( c - value being analogous to genome size in diploid organisms ) was created by researchers at the royal botanic gardens, kew, in 1997. see also list of organisms by chromosome count references external links animal genome size database plant dna c - values database fungal genome size database cell size database
|
nuisance wildlife management is the selective removal of problem individuals or populations of specific species of wildlife. other terms for the field include wildlife damage management, wildlife control, and animal damage control. some wild animal species may get used to human presence, causing property damage or risking the transfer of diseases ( zoonoses ) to humans or pets. many wildlife species coexist with humans very successfully, such as commensal rodents which have become more or less dependent on humans. common nuisance species wild animals that can cause problems in homes, gardens or yards include armadillos, skunks, boars, foxes, squirrels, snakes, rats, groundhogs, beavers, opossums, raccoons, bats, moles, deer, mice, coyotes, bears, ravens, seagulls, woodpeckers and pigeons. in the united states, some of these species are protected, such as bears, ravens, bats, deer, woodpeckers, and coyotes, and a permit may be required to control some species. conflicts between people and wildlife arise in certain situations, such as when an animal's population becomes too large for a particular area to support. human - induced changes in the environment will often result in increased numbers of a species. for example, piles of scrap building material make excellent sites where rodents can nest. food left out for household pets is often equally attractive to some wildlife species. in these situations, the wildlife have suitable food and habitat
|
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
|
which of these would make a better material for an electronic device component?
|
[
"a brown copper panel",
"a string of cotton",
"a coil of rubber",
"a strip of plastic"
] |
Key fact:
wiring requires an electrical conductor
|
A
| 0
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by
|
a crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. they are characterized by symmetry, morphology, and directionally dependent physical properties. a crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. ( molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in x - ray, neutron, and electron diffraction based crystallography ). crystal structures of crystalline material are typically determined from x - ray or neutron single - crystal diffraction data and stored in crystal structure databases. they are routinely identified by comparing reflection intensities and lattice spacings from x - ray powder diffraction data with entries in powder - diffraction fingerprinting databases. crystal structures of nanometer sized crystalline samples can be determined via structure factor amplitude information from single - crystal electron diffraction data or structure factor amplitude and phase angle information from fourier transforms of hrtem images of crystallites. they are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice - fringe fingerprint plots with entries in a lattice - fringe fingerprinting database. crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. many provide structure visualization capabilities. they can be browser based or
|
There are creatures which, depending on species, have varying numbers of body parts. Arachnids have a certain number, while humans have a different number, and these numerous parts can be attributed to
|
[
"survival",
"environmental growth",
"inherited characteristics",
"developmental abilities"
] |
Key fact:
the number of body parts of an organism is an inherited characteristic
|
C
| 2
|
openbookqa
|
in bioinformatics, a gene disease database is a systematized collection of data, typically structured to model aspects of reality, in a way to comprehend the underlying mechanisms of complex diseases, by understanding multiple composite interactions between phenotype - genotype relationships and gene - disease mechanisms. gene disease databases integrate human gene - disease associations from various expert curated databases and text mining derived associations including mendelian, complex and environmental diseases. introduction experts in different areas of biology and bioinformatics have been trying to comprehend the molecular mechanisms of diseases to design preventive and therapeutic strategies for a long time. for some illnesses, it has become apparent that it is not enough to obtain an index of the disease - related genes ; it is also necessary to uncover how disruptions of molecular grids in the cell give rise to disease phenotypes. moreover, even with the unprecedented wealth of information available, obtaining such catalogues is extremely difficult. genetic broadly speaking, genetic diseases are caused by aberrations in genes or chromosomes. many genetic diseases are developed from before birth. genetic disorders account for a significant number of the health care problems in our society. advances in the understanding of these diseases have increased both the life span and quality of life for many of those affected by genetic disorders. recent developments in bioinformatics and laboratory genetics have made possible the better delineation of certain malformation and mental retardation syndromes, so that their mode of inheritance
|
genedb was a genome database for eukaryotic and prokaryotic pathogens. references external links http : / / www. genedb. org
|
phenomicdb is a free phenotype oriented database. it contains data for some of the main model organisms such as homo sapiens, mus musculus, drosophila melanogaster, and others. phenomicdb merges and structures phenotypic data from various public sources : wormbase, flybase, ncbi gene, mgi and zfin using clustering algorithms. the website is now offline. references further reading groth p, kalev i, kirov i, traikov b, leser u, weiss b ( august 2010 ). " phenoclustering : online mining of cross - species phenotypes ". bioinformatics. 26 ( 15 ) : 1924 - 5. doi : 10. 1093 / bioinformatics / btq311. pmc 2905556. pmid 20562418. groth p, pavlova n, kalev i, tonov s, georgiev g, pohlenz hd, weiss b ( january 2007 ). " phenomicdb : a new cross - species genotype / phenotype resource ". nucleic acids research. 35 ( database issue ) : d696 - 9. doi : 10. 1093 / nar / gkl662. pmc 1781118. pmid 16982638. kahraman a, avramov a, nashev lg, popov d,
|
A human can, merely by pushing on it, cause a visible alteration in the shape of a
|
[
"manly sculpture",
"inhumanly tall building",
"plastic dinner container",
"standard burning sun"
] |
Key fact:
if a flexible container is pushed on then that container will change shape
|
C
| 2
|
openbookqa
|
a crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. they are characterized by symmetry, morphology, and directionally dependent physical properties. a crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. ( molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in x - ray, neutron, and electron diffraction based crystallography ). crystal structures of crystalline material are typically determined from x - ray or neutron single - crystal diffraction data and stored in crystal structure databases. they are routinely identified by comparing reflection intensities and lattice spacings from x - ray powder diffraction data with entries in powder - diffraction fingerprinting databases. crystal structures of nanometer sized crystalline samples can be determined via structure factor amplitude information from single - crystal electron diffraction data or structure factor amplitude and phase angle information from fourier transforms of hrtem images of crystallites. they are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice - fringe fingerprint plots with entries in a lattice - fringe fingerprinting database. crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. many provide structure visualization capabilities. they can be browser based or
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
an array database management system or array dbms provides database services specifically for arrays ( also called raster data ), that is : homogeneous collections of data items ( often called pixels, voxels, etc. ), sitting on a regular grid of one, two, or more dimensions. often arrays are used to represent sensor, simulation, image, or statistics data. such arrays tend to be big data, with single objects frequently ranging into terabyte and soon petabyte sizes ; for example, today's earth and space observation archives typically grow by terabytes a day. array databases aim at offering flexible, scalable storage and retrieval on this information category. overview in the same style as standard database systems do on sets, array dbmss offer scalable, flexible storage and flexible retrieval / manipulation on arrays of ( conceptually ) unlimited size. as in practice arrays never appear standalone, such an array model normally is embedded into some overall data model, such as the relational model. some systems implement arrays as an analogy to tables, some introduce arrays as an additional attribute type. management of arrays requires novel techniques, particularly due to the fact that traditional database tuples and objects tend to fit well into a single database page ( a unit of disk access on a server, typically 4 kb ), while array objects can easily span several media. the prime task of the array storage manager is to give fast access to large arrays and sub - arrays. to this end, arrays get partitioned, during insertion, into
|
Standing in a canyon and yelling your name
|
[
"will cause the canyon to shake",
"will cause your name to reverberate thru the canyon",
"will cause complete silence",
"will cause animals to run up to you"
] |
Key fact:
echo is when sound reflects off of a surface
|
B
| 1
|
openbookqa
|
in arid regions, a mountain stream may flow onto flatter land. the stream slows rapidly. the deposits form an alluvial fan ( figure below ).
|
noise pollution, or sound pollution, is the propagation of noise or sound with potential harmful effects on humans and animals. the source of outdoor noise worldwide is mainly caused by machines, transport and propagation systems. poor urban planning may give rise to noise disintegration or pollution, side - by - side industrial, and residential buildings can result in noise pollution in the residential areas. some of the main sources of noise in residential areas include loud music, transportation ( traffic, rail, airplanes, etc. ), lawn care maintenance, construction, electrical generators, wind turbines, explosions, and people. documented problems associated with noise in urban environments go back as far as ancient rome. research suggests that noise pollution in the united states is the highest in low - income and racial minority neighborhoods, and noise pollution associated with household electricity generators is an emerging environmental degradation in many developing nations. high noise levels can contribute to cardiovascular effects in humans and an increased incidence of coronary artery disease. in animals, noise can increase the risk of death by altering predator or prey detection and avoidance, interfere with reproduction and navigation, and contribute to permanent hearing loss. noise assessment metrics of noise researchers measure noise in terms of pressure, intensity, and frequency. sound pressure level ( spl ) represents the amount of pressure relative to atmospheric pressure during sound wave propagation that can vary with time ; this is also known as the sum of the amplitudes of a wave. sound intensity, measured in watts per meters - squared, represents the flow of
|
a disturbance in ecology refers to events or forces, of non - biological origin, that cause significant changes in the structure and function of an ecosystem. these disturbances can be natural events such as hurricanes, wildfires, and floods, or anthropogenic activities such as deforestation. they often lead to sudden changes, disrupting the stability and possibly creating opportunities for various species to establish or change in abundance, subsequently influencing biodiversity and ecosystem dynamics.
|
An example of an offspring receiving a gene is
|
[
"cooking",
"driving",
"cartwheels",
"moles"
] |
Key fact:
offspring receive genes from their parents through DNA
|
D
| 3
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
the chemical database service is an epsrc - funded mid - range facility that provides uk academic institutions with access to a number of chemical databases. it has been hosted by the royal society of chemistry since 2013, before which it was hosted by daresbury laboratory ( part of the science and technology facilities council ). currently, the included databases are : acd / i - lab, a tool for prediction of physicochemical properties and nmr spectra from a chemical structure available chemicals directory, a structure - searchable database of commercially available chemicals cambridge structural database ( csd ), a crystallographic database of organic and organometallic structures inorganic crystal structure database ( icsd ), a crystallographic database of inorganic structures crystalworks, a database combining data from csd, icsd and crystmet detherm, a database of thermophysical data for chemical compounds and mixtures spresiweb, a database of organic compounds and reactions = = references = =
|
a chemical database is a database specifically designed to store chemical information. this information is about chemical and crystal structures, spectra, reactions and syntheses, and thermophysical data. types of chemical databases bioactivity database bioactivity databases correlate structures or other chemical information to bioactivity results taken from bioassays in literature, patents, and screening programs. chemical structures chemical structures are traditionally represented using lines indicating chemical bonds between atoms and drawn on paper ( 2d structural formulae ). while these are ideal visual representations for the chemist, they are unsuitable for computational use and especially for search and storage. small molecules ( also called ligands in drug design applications ), are usually represented using lists of atoms and their connections. large molecules such as proteins are however more compactly represented using the sequences of their amino acid building blocks. radioactive isotopes are also represented, which is an important attribute for some applications. large chemical databases for structures are expected to handle the storage and searching of information on millions of molecules taking terabytes of physical memory. literature database chemical literature databases correlate structures or other chemical information to relevant references such as academic papers or patents. this type of database includes stn, scifinder, and reaxys. links to literature are also included in many databases that focus on chemical characterization. crystallographic database crystallographic databases store x - ray crystal structure data. common examples include protein data bank and cambridge structural database. nmr spectra database nmr
|
A plane takes off from the ground, lights blazing, and flies into the sky. As the plane ascends and travels to its destination,
|
[
"the lights are brighter",
"the lights appear duller",
"the lights are closer",
"the lights are bigger"
] |
Key fact:
as distance from a source of light increases , that source of light will appear dimmer
|
B
| 1
|
openbookqa
|
led is located in diode section.
|
led is located in diode section.
|
led is located in diode section.
|
A cousin to the mole survives on
|
[
"plants",
"rocks",
"rats",
"rabbits"
] |
Key fact:
meadow voles eat plants
|
A
| 0
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
dr. duke's phytochemical and ethnobotanical databases is an online database developed by james a. duke at the usda. the databases report species, phytochemicals, and biological activity, as well as ethnobotanical uses. the current phytochemical and ethnobotanical databases facilitate plant, chemical, bioactivity, and ethnobotany searches. a large number of plants and their chemical profiles are covered, and data are structured to support browsing and searching in several user - focused ways. for example, users can get a list of chemicals and activities for a specific plant of interest, using either its scientific or common name download a list of chemicals and their known activities in pdf or spreadsheet form find plants with chemicals known for a specific biological activity display a list of chemicals with their ld toxicity data find plants with potential cancer - preventing activity display a list of plants for a given ethnobotanical use find out which plants have the highest levels of a specific chemical references to the supporting scientific publications are provided for each specific result. also included are links to nutritional databases, plants and cancer treatments and other plant - related databases. the content of the database is licensed under the creative commons cc0 public domain. external links dr. duke's phytochemical and ethnobotanical databases references ( dataset ) u. s. department of agriculture, agricultural research service. 1992 - 2016
|
the plant proteome database is a national science foundation - funded project to determine the biological function of each protein in plants. it includes data for two plants that are widely studied in molecular biology, arabidopsis thaliana and maize ( zea mays ). initially the project was limited to plant plastids, under the name of the plastid pdb, but was expanded and renamed plant pdb in november 2007. see also proteome references external links plant proteome database home page
|
A plant needing to photosynthesize will best be able to
|
[
"in a roofless room",
"in a cardboard box",
"in a windowless room",
"in a car with tinted windows"
] |
Key fact:
a plant requires sunlight for photosynthesis
|
A
| 0
|
openbookqa
|
no room can be empty, so every box must have at least 1 object. so, it is a non ordered surjective distribution of 8 distinguishable objects ( k = 8 ) into 5 indistinguishable boxes ( n = 5 ). that is all we need to know to choose the right operation, and the result is : s ( 8, 5 ) = { 8 5 } = ( 1 / 5! ) ∑_{ j = 0 }^{ 5 } ( −1 )^j c ( 5, j ) ( 5 − j )^8 = 1050
|
no room can be empty, so every box must have at least 1 object. so, it is a non ordered surjective distribution of 8 distinguishable objects ( k = 8 ) into 5 indistinguishable boxes ( n = 5 ). that is all we need to know to choose the right operation, and the result is : s ( 8, 5 ) = { 8 5 } = ( 1 / 5! ) ∑_{ j = 0 }^{ 5 } ( −1 )^j c ( 5, j ) ( 5 − j )^8 = 1050
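the counting rule above can be checked numerically. a minimal sketch ( the function name stirling2 is my own ) computing the stirling number of the second kind via the standard inclusion - exclusion identity :

```python
from math import comb, factorial

def stirling2(k, n):
    # Number of ways to partition k distinguishable objects into
    # n non-empty, indistinguishable boxes (Stirling number of the
    # second kind), via inclusion-exclusion:
    #   S(k, n) = (1/n!) * sum_{j=0..n} (-1)^j * C(n, j) * (n - j)^k
    total = sum((-1) ** j * comb(n, j) * (n - j) ** k for j in range(n + 1))
    return total // factorial(n)

print(stirling2(8, 5))  # 1050
```

the integer division is exact because the alternating sum is always a multiple of n!.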
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
What is a likely product of a timber company's activities?
|
[
"the battery of your cell phone",
"the ruler in your backpack",
"the porcelain in your toilet",
"the bottle you drink from"
] |
Key fact:
timber companies cut down trees
|
B
| 1
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
a service is a discrete unit of functionality that can be accessed remotely and acted upon and updated independently, such as retrieving a credit card statement online. the guideline describes how to measure the functional size of distinct components. data warehouse and big data is a field that treats ways to analyze, systematically extract information from, or otherwise deal with data sets that are too large or complex to be dealt with by traditional data - processing application software.
|
a chemical database is a database specifically designed to store chemical information. this information is about chemical and crystal structures, spectra, reactions and syntheses, and thermophysical data. types of chemical databases bioactivity database bioactivity databases correlate structures or other chemical information to bioactivity results taken from bioassays in literature, patents, and screening programs. chemical structures chemical structures are traditionally represented using lines indicating chemical bonds between atoms and drawn on paper ( 2d structural formulae ). while these are ideal visual representations for the chemist, they are unsuitable for computational use and especially for search and storage. small molecules ( also called ligands in drug design applications ), are usually represented using lists of atoms and their connections. large molecules such as proteins are however more compactly represented using the sequences of their amino acid building blocks. radioactive isotopes are also represented, which is an important attribute for some applications. large chemical databases for structures are expected to handle the storage and searching of information on millions of molecules taking terabytes of physical memory. literature database chemical literature databases correlate structures or other chemical information to relevant references such as academic papers or patents. this type of database includes stn, scifinder, and reaxys. links to literature are also included in many databases that focus on chemical characterization. crystallographic database crystallographic databases store x - ray crystal structure data. common examples include protein data bank and cambridge structural database. nmr spectra database nmr
|
Which would least refract light?
|
[
"a cardboard box",
"a bottle",
"a gem",
"a diamond"
] |
Key fact:
objects made of glass cause refraction of light
|
A
| 0
|
openbookqa
|
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
a typical rabbit diet includes
|
[
"crickets",
"mice",
"fish",
"weeds"
] |
Key fact:
rabbits eat plants
|
D
| 3
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
|
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb includes also a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without missing the focus on the searched protein or gene. in phylomedb v4 all the information available for each tree
|
changes in an environment cause plants to
|
[
"morph for continuation",
"boogie",
"bake cakes",
"take long naps"
] |
Key fact:
changes in an environment cause plants to adapt to survive
|
A
| 0
|
openbookqa
|
|
in computing, the count - min sketch ( cm sketch ) is a probabilistic data structure that serves as a frequency table of events in a stream of data. it uses hash functions to map events to frequencies, but unlike a hash table uses only sub - linear space, at the expense of overcounting some events due to collisions. the count - min sketch was invented in 2003 by graham cormode and s. muthu muthukrishnan and described by them in a 2005 paper. count - min sketch is an alternative to count sketch and ams sketch and can be considered an implementation of a counting bloom filter ( fan et al., 1998 ) or multistage - filter. however, they are used differently and therefore sized differently : a count - min sketch typically has a sublinear number of cells, related to the desired approximation quality of the sketch, while a counting bloom filter is more typically sized to match the number of elements in the set. data structure the goal of the basic version of the count - min sketch is to consume a stream of events, one at a time, and count the frequency of the different types of events in the stream. at any time, the sketch can be queried for the frequency of a particular event type i from a universe of event types u, and will return an estimate of this frequency that is within a certain distance of the true frequency, with a certain probability. the
|
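the update/query scheme the count - min chunk describes can be sketched directly: d hash rows of w counters, an event increments one counter per row, and a frequency query returns the minimum over its d counters — an estimate that can overcount ( collisions ) but never undercounts. the width/depth parameters below are illustrative, not the paper's recommended sizing.

```python
import random

class CountMinSketch:
    def __init__(self, width=1024, depth=4, seed=0):
        rng = random.Random(seed)
        self.width = width
        self.tables = [[0] * width for _ in range(depth)]
        # one independent hash seed per row
        self.seeds = [rng.getrandbits(32) for _ in range(depth)]

    def _index(self, item, seed):
        return hash((seed, item)) % self.width

    def update(self, item, count=1):
        # increment one counter in every row
        for row, seed in zip(self.tables, self.seeds):
            row[self._index(item, seed)] += count

    def query(self, item):
        # the minimum over the rows is the least-contaminated estimate
        return min(row[self._index(item, seed)]
                   for row, seed in zip(self.tables, self.seeds))

cms = CountMinSketch()
for _ in range(5):
    cms.update("x")
print(cms.query("x"))  # 5
```

with only one distinct item inserted there are no collisions, so the estimate is exact; under a real stream it is an upper bound on the true count.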
the lazy caterer's sequence, more formally known as the central polygonal numbers, describes the maximum number of pieces of a disk ( a pancake or pizza is usually used to describe the situation ) that can be made with a given number of straight cuts. for example, three cuts across a pancake will produce six pieces if the cuts all meet at a common point inside the circle, but up to seven if they do not. this problem can be formalized mathematically as one of counting the cells in an arrangement of lines ; for generalizations to higher dimensions, see arrangement of hyperplanes. the analogue of this sequence in three dimensions is the cake numbers. formula and sequence the maximum number p of pieces that can be created with a given number of cuts n ( where n ≥ 0 ) is given by the formula p = ( n ^ 2 + n + 2 ) / 2. using binomial coefficients, the formula can be expressed as p = 1 + c ( n + 1, 2 ) = c ( n, 0 ) + c ( n, 1 ) + c ( n, 2 )
|
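the closed form and the binomial-coefficient form above are easy to check against each other in a few lines ( n ^ 2 + n + 2 is always even, so integer division is exact ):

```python
from math import comb

def lazy_caterer(n):
    # maximum pieces from n straight cuts: (n^2 + n + 2) / 2
    return (n * n + n + 2) // 2

def lazy_caterer_binomial(n):
    # equivalent form: C(n,0) + C(n,1) + C(n,2)
    return comb(n, 0) + comb(n, 1) + comb(n, 2)

print([lazy_caterer(n) for n in range(6)])  # [1, 2, 4, 7, 11, 16]
```

n = 3 gives 7, matching the pancake example in the text: three cuts that avoid a common intersection point yield seven pieces.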
A thing which moves very little over quite a lot of time is a
|
[
"pack of wolves",
"racing horses",
"falling rocks",
"giant ice brick"
] |
Key fact:
a glacier moves slowly
|
D
| 3
|
openbookqa
|
|
metpetdb is a relational database and repository for global geochemical data on and images collected from metamorphic rocks from the earth's crust. metpetdb is designed and built by a global community of metamorphic petrologists in collaboration with computer scientists at rensselaer polytechnic institute as part of the national cyberinfrastructure initiative and supported by the national science foundation. metpetdb is unique in that it incorporates image data collected by a variety of techniques, e. g. photomicrographs, backscattered electron images ( sem ), and x - ray maps collected by wavelength dispersive spectroscopy or energy dispersive spectroscopy. purpose metpetdb was built for the purpose of archiving published data and for storing new data for ready access to researchers and students in the petrologic community. this database facilitates the gathering of information for researchers beginning new projects and permits browsing and searching for data relating to anywhere on the globe. metpetdb provides a platform for collaborative studies among researchers anywhere on the planet, serves as a portal for students beginning their studies of metamorphic geology, and acts as a repository of vast quantities of data being collected by researchers globally. design the basic structure of metpetdb is based on a geologic sample and derivative subsamples. geochemical data are linked to subsamples and the minerals within them, while image data can relate to samples or subsamples. metpetdb is designed to store the distinct spatial / textural context
|
blazegraph is an open source triplestore and graph database, written in java. it has been abandoned since 2020 and is known to be used in production by wmde for the wikidata sparql endpoint. it is licensed under the gnu gpl ( version 2 ). amazon acquired the blazegraph developers and the blazegraph open source development was essentially stopped in april 2018. early history the system was first known as bigdata. since release of version 1. 5 ( 12 february 2015 ), it is named blazegraph. prominent users the wikimedia foundation uses blazegraph for the wikidata query service, which is a sparql endpoint. sophox, a fork of the wikidata query service, specializes in openstreetmap queries. the datatourisme project uses blazegraph as the database platform ; however, graphql is used as the query language instead of sparql. notable features rdf * an alternative approach to rdf reification, which gives rdf graphs capabilities of lpg graphs ; as the consequence of the previous, ability of querying graphs both in sparql and gremlin ; as an alternative to gremlin querying, gas abstraction over rdf graphs support in sparql ; the service syntax of federated queries for functionality extending ; managed behavior of the query plan generator ; reusable named subqueries. acqui -
|
Unlike trees in the forest or a pond full of fish, metal is only available until
|
[
"it is depleted for good",
"it is found on catfish scales",
"it becomes necessary to conduct electricity long distances",
"it is used to make diamond rings"
] |
Key fact:
metal is a nonrenewable resource
|
A
| 0
|
openbookqa
|
an array database management system or array dbms provides database services specifically for arrays ( also called raster data ), that is : homogeneous collections of data items ( often called pixels, voxels, etc. ), sitting on a regular grid of one, two, or more dimensions. often arrays are used to represent sensor, simulation, image, or statistics data. such arrays tend to be big data, with single objects frequently ranging into terabyte and soon petabyte sizes ; for example, today's earth and space observation archives typically grow by terabytes a day. array databases aim at offering flexible, scalable storage and retrieval on this information category. overview in the same style as standard database systems do on sets, array dbmss offer scalable, flexible storage and flexible retrieval / manipulation on arrays of ( conceptually ) unlimited size. as in practice arrays never appear standalone, such an array model normally is embedded into some overall data model, such as the relational model. some systems implement arrays as an analogy to tables, some introduce arrays as an additional attribute type. management of arrays requires novel techniques, particularly due to the fact that traditional database tuples and objects tend to fit well into a single database page ( a unit of disk access on a server, typically 4 kb ) while array objects easily can span several media. the prime task of the array storage manager is to give fast access to large arrays and sub - arrays. to this end, arrays get partitioned, during insertion, into
|
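the partitioning idea the array - dbms chunk ends on can be sketched in pure python: a large 2-d array is split into fixed-size tiles on insertion, so a later sub-array request only has to touch the tiles it overlaps. this is an illustrative toy, not any particular system's storage layout.

```python
def make_tiles(array, tile):
    """split a 2-d list into a dict mapping (tile_row, tile_col) -> tile."""
    tiles = {}
    for i in range(0, len(array), tile):
        for j in range(0, len(array[0]), tile):
            tiles[(i // tile, j // tile)] = [row[j:j + tile]
                                             for row in array[i:i + tile]]
    return tiles

# a 4x4 grid numbered 0..15, partitioned into 2x2 tiles
grid = [[r * 4 + c for c in range(4)] for r in range(4)]
tiles = make_tiles(grid, 2)
print(sorted(tiles))   # [(0, 0), (0, 1), (1, 0), (1, 1)]
print(tiles[(1, 0)])   # [[8, 9], [12, 13]]
```

reading the bottom-left quarter of the grid now costs one tile lookup instead of a scan over the whole array — the same access-locality argument scaled down from disk pages to dict entries.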
a crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. they are characterized by symmetry, morphology, and directionally dependent physical properties. a crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. ( molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in x - ray, neutron, and electron diffraction based crystallography ). crystal structures of crystalline material are typically determined from x - ray or neutron single - crystal diffraction data and stored in crystal structure databases. they are routinely identified by comparing reflection intensities and lattice spacings from x - ray powder diffraction data with entries in powder - diffraction fingerprinting databases. crystal structures of nanometer sized crystalline samples can be determined via structure factor amplitude information from single - crystal electron diffraction data or structure factor amplitude and phase angle information from fourier transforms of hrtem images of crystallites. they are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice - fringe fingerprint plots with entries in a lattice - fringe fingerprinting database. crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. many provide structure visualization capabilities. they can be browser based or
|
|
After seeing her give birth, the zookeepers discovered that Harry the ____ was actually a girl.
|
[
"hen",
"hotcake",
"healfish",
"hare"
] |
Key fact:
mammals give birth to live young
|
D
| 3
|
openbookqa
|
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, object - relational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
|
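the standard-library shelve module is not an oodbms, but it illustrates the core idea of the chunk above in miniature: the program's own objects are stored and retrieved directly, with no separate tabular representation in between. the Part class and file path here are invented for the example.

```python
import os
import shelve
import tempfile

class Part:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

path = os.path.join(tempfile.mkdtemp(), "parts.db")

# store a composite object graph under a key
with shelve.open(path) as db:
    db["engine"] = Part("engine", [Part("piston"), Part("valve")])

# it comes back as live Part objects, not rows to be reassembled
with shelve.open(path) as db:
    engine = db["engine"]
    print([c.name for c in engine.children])  # ['piston', 'valve']
```

the contrast with a relational store is the absence of an impedance mismatch: no mapping layer translates between objects and tables, which is exactly the consistency-of-representation argument made in the text.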
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by
|
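a minimal runnable illustration of the relational model described above — data presented as tables of rows and columns, manipulated with sql — using the standard-library sqlite3 module. the schema and values are made up for the example ( the row happens to record the codd paper the text cites ).

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# a relation: a table with named, typed columns
conn.execute("CREATE TABLE papers (author TEXT, year INTEGER, title TEXT)")
conn.execute(
    "INSERT INTO papers VALUES (?, ?, ?)",
    ("e. f. codd", 1970,
     "a relational model of data for large shared data banks"),
)

# a relational operator (selection) expressed in sql
row = conn.execute(
    "SELECT year FROM papers WHERE author = ?", ("e. f. codd",)
).fetchone()
print(row[0])  # 1970
```

the two bullet points the text draws from codd's definition — tabular presentation and relational operators — correspond directly to the CREATE TABLE and SELECT statements here.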
What resources can be used over again?
|
[
"finite",
"fuels",
"sustainable",
"one use"
] |
Key fact:
renewable resources can be used over again
|
C
| 2
|
openbookqa
|
database theory encapsulates a broad range of topics related to the study and research of the theoretical realm of databases and database management systems. theoretical aspects of data management include, among other areas, the foundations of query languages, computational complexity and expressive power of queries, finite model theory, database design theory, dependency theory, foundations of concurrency control and database recovery, deductive databases, temporal and spatial databases, real - time databases, managing uncertain data and probabilistic databases, and web data. most research work has traditionally been based on the relational model, since this model is usually considered the simplest and most foundational model of interest. corresponding results for other data models, such as object - oriented or semi - structured models, or, more recently, graph data models and xml, are often derivable from those for the relational model. database theory helps one to understand the complexity and power of query languages and their connection to logic. starting from relational algebra and first - order logic ( which are equivalent by codd's theorem ) and the insight that important queries such as graph reachability are not expressible in this language, more powerful languages based on logic programming and fixpoint logic, such as datalog, were studied. the theory also explores foundations of query optimization and data integration. here most work studied conjunctive queries, which admit query optimization even under constraints using the chase algorithm. the main research conferences in the area are the acm symposium on principles of database systems ( pods
|
|
schema - agnostic databases or vocabulary - independent databases aim at supporting users to be abstracted from the representation of the data, supporting the automatic semantic matching between queries and databases. schema - agnosticism is the property of a database of mapping a query issued with the user terminology and structure, automatically mapping it to the dataset vocabulary. the increase in the size and in the semantic heterogeneity of database schemas bring new requirements for users querying and searching structured data. at this scale it can become unfeasible for data consumers to be familiar with the representation of the data in order to query it. at the center of this discussion is the semantic gap between users and databases, which becomes more central as the scale and complexity of the data grows. description the evolution of data environments towards the consumption of data from multiple data sources and the growth in the schema size, complexity, dynamicity and decentralisation ( scodd ) of schemas increases the complexity of contemporary data management. the scodd trend emerges as a central data management concern in big data scenarios, where users and applications have a demand for more complete data, produced by independent data sources, under different semantic assumptions and contexts of use, which is the typical scenario for semantic web data applications. the evolution of databases in the direction of heterogeneous data environments strongly impacts the usability, semiotics and semantic assumptions behind existing data accessibility methods such as structured queries, key
|
A skunk produces a bad what?
|
[
"job",
"plastic",
"energy",
"nose experience"
] |
Key fact:
a skunk produces a bad odor
|
D
| 3
|
openbookqa
|
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
|
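the aggregate-only access policy the statistical-database chunk describes can be sketched as a query gate that suppresses results over small groups. a minimum group size alone is a weak defence — as the text notes, combinations of aggregate queries can still isolate an individual — so this is only a toy illustration; the threshold and record layout are invented for the example.

```python
MIN_GROUP = 3  # refuse to aggregate over fewer records than this

def mean_salary(records, predicate):
    group = [r["salary"] for r in records if predicate(r)]
    if len(group) < MIN_GROUP:
        raise ValueError("query suppressed: group too small")
    return sum(group) / len(group)

people = [{"dept": "a", "salary": s} for s in (50, 60, 70)] + \
         [{"dept": "b", "salary": 90}]

print(mean_salary(people, lambda r: r["dept"] == "a"))  # 60.0
# mean_salary(people, lambda r: r["dept"] == "b") would raise:
# the group has one member, so returning it reveals an individual record
```

the suppressed case is the whole point: an aggregate over a singleton group is not an aggregate at all.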
|
|
Over the years, the desert rat has evolved traits that help it live with low supplies of water, what is this an example of?
|
[
"Acquired statistics",
"Acquired interests",
"Acquired characteristics",
"Acquired heuristics"
] |
Key fact:
an organism 's environment affects that organism 's acquired characteristics
|
C
| 2
|
openbookqa
|
database theory encapsulates a broad range of topics related to the study and research of the theoretical realm of databases and database management systems. theoretical aspects of data management include, among other areas, the foundations of query languages, computational complexity and expressive power of queries, finite model theory, database design theory, dependency theory, foundations of concurrency control and database recovery, deductive databases, temporal and spatial databases, real - time databases, managing uncertain data and probabilistic databases, and web data. most research work has traditionally been based on the relational model, since this model is usually considered the simplest and most foundational model of interest. corresponding results for other data models, such as object - oriented or semi - structured models, or, more recently, graph data models and xml, are often derivable from those for the relational model. database theory helps one to understand the complexity and power of query languages and their connection to logic. starting from relational algebra and first - order logic ( which are equivalent by codd's theorem ) and the insight that important queries such as graph reachability are not expressible in this language, more powerful language based on logic programming and fixpoint logic such as datalog were studied. the theory also explores foundations of query optimization and data integration. here most work studied conjunctive queries, which admit query optimization even under constraints using the chase algorithm. the main research conferences in the area are the acm symposium on principles of database systems ( pods
|
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
|
the context of count data.
|
what role does some plankton have that is similar to farmer in ohio?
|
[
"needs food",
"produces food",
"can get sick",
"lives in ocean"
] |
Key fact:
In the food chain process some types of plankton have the role of producer
|
B
| 1
|
openbookqa
|
phytoplankton are the primary producers in the ocean. they form the base of most marine food chains.
|
upwelling brings nutrients to the surface from the ocean floor. nutrients are important resources for ocean life. however, they aren ’ t the only resources on the ocean floor.
|
the ocean floor is home to many species of living things. some from shallow water are used by people for food. clams and some fish are among the many foods we get from the ocean floor. some living things on the ocean floor are sources of human medicines. for example, certain bacteria on the ocean floor produce chemicals that fight cancer.
|
When light energy enters a prism it emits all the colors by
|
[
"deflecting the light",
"reflecting the light",
"consuming the light",
"refracting the light"
] |
Key fact:
a rainbow is formed by refraction of light by splitting light into all different colors
|
D
| 3
|
openbookqa
|
refracting and reflecting telescopes are optical telescopes that use lenses to gather light.
|
transmitted light may be refracted or scattered. when does each process occur?.
|
almost all surfaces reflect some of the light that strikes them. the still water of the lake in figure above reflects almost all of the light that strikes it. the reflected light forms an image of nearby objects. an image is a copy of an object that is formed by reflected or refracted light.
|
The air was cold, so all night the sheep kept
|
[
"sleeping",
"shaking",
"jumping",
"running."
] |
Key fact:
shivering is when an animal creates heat by shaking to keep the body warm
|
B
| 1
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
|
exploitdb, sometimes stylized as exploit database or exploit - database, is a public and open source vulnerability database maintained by offensive security. it is one of the largest and most popular exploit databases in existence. while the database is publicly available via their website, the database can also be used by utilizing the searchsploit command - line tool which is native to kali linux. the database also contains proof - of - concepts ( pocs ), helping information security professionals learn new exploit variations. in ethical hacking and penetration testing guide, rafay baloch said exploit - db had over 20, 000 exploits, and was available in backtrack linux by default. in ceh v10 certified ethical hacker study guide, ric messier called exploit - db a " great resource ", and stated it was available within kali linux by default, or could be added to other linux distributions. the current maintainers of the database, offensive security, are not responsible for creating the database. the database was started in 2004 by a hacker group known as milw0rm and has changed hands several times. as of 2023, the database contained 45, 000 entries from more than 9, 000 unique authors. see also offensive security offensive security certified professional references external links official website
|
If acorns are moved around a neighborhood, then the most reasonable culprit is
|
[
"snakes",
"ferrets",
"sharks",
"bees"
] |
Key fact:
An example of seed dispersal is an animal gathering seeds
|
B
| 1
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
|
tassdb ( tandem splice site database ) is a database of tandem splice sites of eight species see also alternative splicing references external links https://archive.today/20070106023527/http://helios.informatik.uni-freiburg.de/tassdb/.
|
if the particles in an electric rig were immobile, what would result from that?
|
[
"there will be a shock current",
"the circuit will fail to power",
"there would be a short circuit",
"there would be current overload"
] |
Key fact:
electricity is made of moving charged particles
|
B
| 1
|
openbookqa
|
a short circuit ( sometimes abbreviated to short or s / c ) is an electrical circuit that allows a current to travel along an unintended path with no or very low electrical impedance. this results in an excessive current flowing through the circuit. the opposite of a short circuit is an open circuit, which is an infinite resistance ( or very high impedance ) between two nodes. definition a short circuit is an abnormal connection between two nodes of an electric circuit intended to be at different voltages. this results in an electric current limited only by the thvenin equivalent resistance of the rest of the network which can cause circuit damage, overheating, fire or explosion. although usually the result of a fault, there are cases where short circuits are caused intentionally, for example, for the purpose of voltage - sensing crowbar circuit protectors. in circuit analysis, a short circuit is defined as a connection between two nodes that forces them to be at the same voltage. in an'ideal'short circuit, this means there is no resistance and thus no voltage drop across the connection. in real circuits, the result is a connection with almost no resistance. in such a case, the current is limited only by the resistance of the rest of the circuit. examples a common type of short circuit occurs when the positive and negative terminals of a battery or a capacitor are connected with a low - resistance conductor, like a wire. with a low resistance in the connection, a high current will flow, causing
|
an electric circuit consists of one or two closed loops through which current can flow. it has a voltage source and a conductor and may have other devices such as lights and switches.
|
figure 20. 8 shows the schematic for a simple circuit. a simple circuit has a single voltage source and a single resistor. the wires connecting the voltage source to the resistor can be assumed to have negligible resistance, or their resistance can be included in r.
|
Which of the following is a learned behavior?
|
[
"thinking",
"cooking",
"hearing",
"breathing"
] |
Key fact:
animals learn some behaviors from watching their parents
|
B
| 1
|
openbookqa
|
computational audiology is a branch of audiology that employs techniques from mathematics and computer science to improve clinical treatments and scientific understanding of the auditory system. computational audiology is closely related to computational medicine, which uses quantitative models to develop improved methods for general disease diagnosis and treatment. overview in contrast to traditional methods in audiology and hearing science research, computational audiology emphasizes predictive modeling and large - scale analytics ( " big data " ) rather than inferential statistics and small - cohort hypothesis testing. the aim of computational audiology is to translate advances in hearing science, data science, information technology, and machine learning to clinical audiological care. research to understand hearing function and auditory processing in humans as well as relevant animal species represents translatable work that supports this aim. research and development to implement more effective diagnostics and treatments represent translational work that supports this aim. for people with hearing difficulties, tinnitus, hyperacusis, or balance problems, these advances might lead to more precise diagnoses, novel therapies, and advanced rehabilitation options including smart prostheses and e - health / mhealth apps. for care providers, it can provide actionable knowledge and tools for automating part of the clinical pathway. the field is interdisciplinary and includes foundations in audiology, auditory neuroscience, computer science, data science, machine learning, psychology, signal processing, natural language processing, otology and vestibulology. applications in computational audiology, models and
|
a think - aloud ( or thinking aloud ) protocol is a method used to gather data in usability testing in product design and development, in psychology and a range of social sciences ( e. g., reading, writing, translation research, decision making, and process tracing ). description think - aloud protocols involve participants thinking aloud as they are performing a set of specified tasks. participants are asked to say whatever comes into their mind as they complete the task. this might include what they are looking at, thinking, doing, and feeling. this gives observers insight into the participant's cognitive processes ( rather than only their final product ), to make thought processes as explicit as possible during task performance. in a formal research protocol, all verbalizations are transcribed and then analyzed. in a usability testing context, observers are asked to take notes of what participants say and do, without attempting to interpret their actions and words, and especially noting places where they encounter difficulty. test sessions may be completed on participants own devices or in a more controlled setting. sessions are often audio - and video - recorded so that developers can go back and refer to what participants did and how they reacted. history the think - aloud method was introduced in the usability field by clayton lewis while he was at ibm, and is explained in task - centered user interface design : a practical introduction by lewis and john rieman. the method was developed based on the techniques of protocol analysis by k. ericsson and h. simon. however,
|
audification is an auditory display technique for representing a sequence of data values as sound. by definition, it is described as a " direct translation of a data waveform to the audible domain. " audification interprets a data sequence and usually a time series, as an audio waveform where input data are mapped to sound pressure levels. various signal processing techniques are used to assess data features. the technique allows the listener to hear periodic components as frequencies. audification typically requires large data sets with periodic components. audification is most commonly applied to get the most direct and simple representation of data from sound and to convert it into a visual. in most cases it will always be used for taking sounds and breaking it down in a way that we can visually understand it and construct more data from it. history the idea of audification was introduced in 1992 by greg kramer, initially as a sonification technique. this was the beginning of audification, but is also why most people to this day still consider audification a type of sonification. the goal of audification is to allow the listener to audibly experience the results of scientific measurements or simulations. a 2007 study by sandra pauletto and andy hunt at the university of york suggested that users were able to detect attributes such as noise, repetitive elements, regular oscillations, discontinuities, and signal power in audification of time - series data to a degree comparable with visual inspection of spectrograms. applications applications include audification of seismic
|
Small animals will leave their habitat and look for new shelter when there is a
|
[
"less animals around",
"too much food",
"destruction",
"better food"
] |
Key fact:
rocks are a source of shelter for small animals in an environment
|
C
| 2
|
openbookqa
|
the animal genome size database is a catalogue of published genome size estimates for vertebrate and invertebrate animals. it was created in 2001 by dr. t. ryan gregory of the university of guelph in canada. as of september 2005, the database contains data for over 4, 000 species of animals. a similar database, the plant dna c - values database ( c - value being analogous to genome size in diploid organisms ) was created by researchers at the royal botanic gardens, kew, in 1997. see also list of organisms by chromosome count references external links animal genome size database plant dna c - values database fungal genome size database cell size database
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb includes also a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without missing the focus on the searched protein or gene. in phylomedb v4 all the information available for each tree
|
As water increases in an environment the number of aquatic animals such as zooplankton, nekton, and benthos will
|
[
"on the up",
"fall",
"stagnate",
"face extinction"
] |
Key fact:
a body of water is a source of water
|
A
| 0
|
openbookqa
|
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
the facial recognition technology ( feret ) database is a dataset used for facial recognition system evaluation as part of the face recognition technology ( feret ) program. it was first established in 1993 under a collaborative effort between harry wechsler at george mason university and jonathon phillips at the army research laboratory in adelphi, maryland. the feret database serves as a standard database of facial images for researchers to use to develop various algorithms and report results. the use of a common database also allowed one to compare the effectiveness of different approaches in methodology and gauge their strengths and weaknesses. the facial images for the database were collected between december 1993 and august 1996, accumulating a total of 14, 126 images pertaining to 1, 199 individuals along with 365 duplicate sets of images that were taken on a different day. in 2003, the defense advanced research projects agency ( darpa ) released a high - resolution, 24 - bit color version of these images. the dataset tested includes 2, 413 still facial images, representing 856 individuals. the feret database has been used by more than 460 research groups and is managed by the national institute of standards and technology ( nist ). references external links official website about the gray - scale version official website about the color version more official information ieee transactions on pattern analysis and machine intelligence, vol. 22, no. 10, october 2000 more documents about feret
|