Columns: question (string), options (sequence), rationale (string), label (string), label_idx (int64), dataset (string), chunk1 (string), chunk2 (string), chunk3 (string)

question: With the addition of thrusters your forward momentum will
options: ["stop", "increase", "decrease", "stall"]
rationale: Key fact: a force continually acting on an object in the same direction that the object is moving can cause that object 's speed to increase in a forward motion
label: B
label_idx: 1
dataset: openbookqa
chunk1:
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
chunk2:
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
chunk3:
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python

question: Nocturnal predators hunt when?
options: ["sleep time", "midday", "morning", "noon"]
rationale: Key fact: nocturnal predators hunt during the night
label: A
label_idx: 0
dataset: openbookqa
chunk1:
a temporal database stores data relating to time instances. it offers temporal data types and stores information relating to past, present and future time. temporal databases can be uni - temporal, bi - temporal or tri - temporal. more specifically the temporal aspects usually include valid time, transaction time and / or decision time. valid time is the time period during or event time at which a fact is true in the real world. transaction time is the time at which a fact was recorded in the database. decision time is the time at which the decision was made about the fact. used to keep a history of decisions about valid times. types uni - temporal a uni - temporal database has one axis of time, either the validity range or the system time range. bi - temporal a bi - temporal database has two axes of time : valid time transaction time or decision time tri - temporal a tri - temporal database has three axes of time : valid time transaction time decision time this approach introduces additional complexities. temporal databases are in contrast to current databases ( not to be confused with currently available databases ), which store only facts which are believed to be true at the current time. features temporal databases support managing and accessing temporal data by providing one or more of the following features : a time period datatype, including the ability to represent time periods with no end ( infinity or forever ) the ability to define valid and transaction time period attributes and bitemporal relations system - maintained transaction time temporal primary keys, including
chunk2:
a chronotype is the behavioral manifestation of an underlying circadian rhythm's myriad of physical processes. a person's chronotype is the propensity for the individual to sleep at a particular time during a 24 - hour period. eveningness ( delayed sleep period ; most active and alert in the evening ) and morningness ( advanced sleep period ; most active and alert in the morning ) are the two extremes with most individuals having some flexibility in the timing of their sleep period. however, across development there are changes in the propensity of the sleep period with pre - pubescent children preferring an advanced sleep period, adolescents preferring a delayed sleep period and many elderly preferring an advanced sleep period. humans are normally diurnal creatures that are active in the daytime. as with most other diurnal animals, human activity - rest patterns are endogenously regulated by biological clocks with a circadian ( ~ 24 - hour ) period. chronotypes have also been investigated in other species, such as fruit flies and mice. history physiology professor nathaniel kleitman's 1939 book sleep and wakefulness, revised 1963, summarized the existing knowledge of sleep and proposed the existence of a basic rest - activity cycle. kleitman, with his students including william c. dement and eugene aserinsky, continued his research throughout the 1900s. o. quist's 1970 thesis at the department of psychology, university of gteborg, sweden, marks the beginning of modern research into
chunk3:
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :

question: What can genes do?
options: ["Give a young goat hair that looks like its mother's hair", "Make a baby chubby", "Make a horse break its leg", "Attack viruses and bacteria"]
rationale: Key fact: genes are a vehicle for passing inherited characteristics from parent to offspring
label: A
label_idx: 0
dataset: openbookqa
chunk1:
genedb was a genome database for eukaryotic and prokaryotic pathogens. references external links http : / / www. genedb. org
chunk2:
treefam ( tree families database ) is a database of phylogenetic trees of animal genes. it aims at developing a curated resource that gives reliable information about ortholog and paralog assignments, and evolutionary history of various gene families. treefam defines a gene family as a group of genes that evolved after the speciation of single - metazoan animals. it also tries to include outgroup genes like yeast ( s. cerevisiae and s. pombe ) and plant ( a. thaliana ) to reveal these distant members. treefam is also an ortholog database. unlike other pairwise alignment based ones, treefam infers orthologs by means of gene trees. it fits a gene tree into the universal species tree and finds historical duplications, speciations and losses events. treefam uses this information to evaluate tree building, guide manual curation, and infer complex ortholog and paralog relations. the basic elements of treefam are gene families that can be divided into two parts : treefam - a and treefam - b families. treefam - b families are automatically created. they might contain errors given complex phylogenies. treefam - a families are manually curated from treefam - b ones. family names and node names are assigned at the same time. the ultimate goal of treefam is to present a curated resource for all the families. treefa
chunk3:
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb includes also a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without missing the focus on the searched protein or gene. in phylomedb v4 all the information available for each tree

question: When both a dominant and recessive gene are present, the dominate what will be visible?
options: ["society", "feature", "person", "path"]
rationale: Key fact: when both a dominant and recessive gene are present , the dominant trait will be visible
label: B
label_idx: 1
dataset: openbookqa
chunk1:
a user profile in machine learning and data science is generally built to understand and predict user behaviors and preferences. it usually includes demographic information ( e. g., age, gender, location ) and historical interaction data ( e. g., past purchases, clicked items, browsing history ) to personalize experiences or to make recommendations. some systems also augment user profiles with external social media data, although this is not always the case.
chunk2:
until the 1980s, databases were viewed as computer systems that stored record - oriented and business data such as manufacturing inventories, bank records, and sales transactions. a database system was not expected to merge numeric data with text, images, or multimedia information, nor was it expected to automatically notice patterns in the data it stored. in the late 1980s the concept of an intelligent database was put forward as a system that manages information ( rather than data ) in a way that appears natural to users and which goes beyond simple record keeping. the term was introduced in 1989 by the book intelligent databases by kamran parsaye, mark chignell, setrag khoshafian and harry wong. the concept postulated three levels of intelligence for such systems : high level tools, the user interface and the database engine. the high level tools manage data quality and automatically discover relevant patterns in the data with a process called data mining. this layer often relies on the use of artificial intelligence techniques. the user interface uses hypermedia in a form that uniformly manages text, images and numeric data. the intelligent database engine supports the other two layers, often merging relational database techniques with object orientation. in the twenty - first century, intelligent databases have now become widespread, e. g. hospital databases can now call up patient histories consisting of charts, text and x - ray images just with a few mouse clicks, and many corporate databases include decision support tools based on sales pattern analysis. external links intelligent databases, book
chunk3:
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by

question: What could be used as a conductor?
options: ["a cat", "A penny", "a cloud", "wood"]
rationale: Key fact: sending electricity through a conductor causes electric current to flow through that conductor
label: B
label_idx: 1
dataset: openbookqa
chunk1:
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by
chunk2:
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
chunk3:
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python

question: sunlight is a heat source emitted from
options: ["a white dwarf star", "our only yellow star", "a nearby quasar star", "a red giant star"]
rationale: Key fact: the sun is a source of heat called sunlight
label: B
label_idx: 1
dataset: openbookqa
chunk1:
we have a main sequence star nearby. our sun is on the main sequence, classified as a yellow dwarf. our sun has been a main sequence star for about 5 billion years. as a medium - sized star, it will continue to shine for about 5 billion more years. most stars are on the main sequence.
chunk2:
a red giant is a luminous giant star of low or intermediate mass ( roughly 0. 38 solar masses ( m ) ) in a late phase of stellar evolution. the outer atmosphere is inflated and tenuous, making the radius large and the surface temperature around 5, 000 k [ k ] ( 4, 700 c ; 8, 500 f ) or lower. the appearance of the red giant is from yellow - white to reddish - orange, including the spectral types k and m, sometimes g, but also class s stars and most carbon stars. red giants vary in the way by which they generate energy : most common red giants are stars on the red - giant branch ( rgb ) that are still fusing hydrogen into helium in a shell surrounding an inert helium core red - clump stars in the cool half of the horizontal branch, fusing helium into carbon in their cores via the triple - alpha process asymptotic - giant - branch ( agb ) stars with a helium burning shell outside a degenerate carbonoxygen core, and a hydrogen - burning shell just beyond that. many of the well - known bright stars are red giants because they are luminous and moderately common. the k0 rgb star arcturus is 36 light - years away, and gacrux is the nearest m - class giant at 88 light - years'distance. a red giant will usually produce a planetary nebula and become a white dwarf at the end of its life. characteristics a red giant is
chunk3:
stars are classified by color and temperature. the most common system uses the letters o ( blue ), b ( blue - white ), a ( white ), f ( yellow - white ), g ( yellow ), k ( orange ), and m ( red ), from hottest to coolest.

question: A waste product of human respiration
options: ["is a vital resource to pigs", "is a vital resource to daffodils", "is a vital resource to oceans", "is a vital resource to bees"]
rationale: Key fact: In the respiration process carbon dioxide is a waste product
label: B
label_idx: 1
dataset: openbookqa
chunk1:
in biology and ecology, a resource is a substance or object in the environment required by an organism for normal growth, maintenance, and reproduction. resources can be consumed by one organism and, as a result, become unavailable to another organism. for plants key resources are light, nutrients, water, and space to grow. for animals key resources are food, water, and territory. key resources for plants terrestrial plants require particular resources for photosynthesis and to complete their life cycle of germination, growth, reproduction, and dispersal : carbon dioxide microsite ( ecology ) nutrients pollination seed dispersal soil water key resources for animals animals require particular resources for metabolism and to complete their life cycle of gestation, birth, growth, and reproduction : foraging territory water resources and ecological processes resource availability plays a central role in ecological processes : carrying capacity biological competition liebig's law of the minimum niche differentiation see also abiotic component biotic component community ecology ecology population ecology plant ecology size - asymmetric competition = = references = =
chunk2:
the miriam registry, a by - product of the miriam guidelines, is a database of namespaces and associated information that is used in the creation of uniform resource identifiers. it contains the set of community - approved namespaces for databases and resources serving, primarily, the biological sciences domain. these shared namespaces, when combined with'data collection'identifiers, can be used to create globally unique identifiers for knowledge held in data repositories. for more information on the use of uris to annotate models, see the specification of sbml level 2 version 2 ( and above ). a'data collection'is defined as a set of data which is generated by a provider. a'resource'is defined as a distributor of that data. such a description allows numerous resources to be associated with a single collection, allowing accurate representation of how biological information is available on the world wide web ; often the same information, from a single data collection, may be mirrored by different resources, or the core information may be supplemented with other data. data collection name : gene ontology data collection identifier : mir : 00000022 data collection synonyms : go data collection identifier pattern : ^ go : \ d { 7 } $ data collection namespace : urn : miriam : obo. go data collection'root url': http : / / identifiers. org / obo. go / data collection'root ur
chunk3:
a natural resource is anything in nature that humans need. metals and fossil fuels are natural resources. but so are water, sunlight, soil, and wind. even living things are natural resources.

question: Which of the following is most likely to make a person shiver?
options: ["being in a gym", "being in a sauna", "being in a fridge", "being in a pool"]
rationale: Key fact: cool temperatures cause animals to shiver
label: C
label_idx: 2
dataset: openbookqa
chunk1:
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
chunk2:
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
chunk3:
your energy bar is an example. some of the chemical energy stored in the bar is absorbed into molecules your body uses.

question: if a tunnel had a modern facility for seeing, what can we infer from this?
options: ["there is water in use", "Thomas Edison's work is in use", "there is sunlight in use", "there is petrol in use"]
rationale: Key fact: a light bulb requires electrical energy to produce light
label: B
label_idx: 1
dataset: openbookqa
chunk1:
over 90 % of the energy we use comes originally from the sun. every day, the sun provides the earth with almost 10, 000 times the amount of energy necessary to meet all of the world ’ s energy needs for that day. our challenge is to find ways to convert and store incoming solar energy so that it can be used in reactions or chemical processes that are both convenient and nonpolluting. plants and many bacteria capture solar energy through photosynthesis. we release the energy stored in plants when we burn wood or plant products such as ethanol. we also use this energy to fuel our bodies by eating food that comes directly from plants or from animals that got their energy by eating plants. burning coal and petroleum also releases stored solar energy : these fuels are fossilized plant and animal matter. this chapter will introduce the basic ideas of an important area of science concerned with the amount of heat absorbed or released during chemical and physical changes — an area called thermochemistry. the concepts introduced in this chapter are widely used in almost all scientific and technical fields. food scientists use them to determine the energy content of foods. biologists study the energetics of living organisms, such as the metabolic combustion of sugar into carbon dioxide and water. the oil, gas, and transportation industries, renewable energy providers, and many others endeavor to find better methods to produce energy for our commercial and personal needs. engineers strive to improve energy efficiency, find better ways to heat and cool our homes, refrigerate
chunk2:
about half the energy used in the u. s. is used in homes and for transportation. businesses, stores, and industry use the other half.
chunk3:
( t ) ata = fire ita = rock, stone, metal, y = water, river yby = earth, ground ybytu = air, wind

question: A plant needing to photosynthesize will want to be placed nearest to a
options: ["fridge", "bed", "skylight", "basement"]
rationale: Key fact: a plant requires sunlight for photosynthesis
label: C
label_idx: 2
dataset: openbookqa
chunk1:
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
chunk2:
a database catalog of a database instance consists of metadata in which definitions of database objects such as base tables, views ( virtual tables ), synonyms, value ranges, indexes, users, and user groups are stored. it is an architecture product that documents the database's content and data quality. standards the sql standard specifies a uniform means to access the catalog, called the information _ schema, but not all databases follow this, even if they implement other aspects of the sql standard. for an example of database - specific metadata access methods, see oracle metadata. see also data dictionary data lineage data catalog vocabulary, a w3c standard for metadata metadata registry, central location where metadata definitions are stored and maintained metadata repository, a database created to store metadata = = references = =
chunk3:
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by

question: What environment has low rainfall?
options: ["tropics", "sandy zone", "sandbox", "forests"]
rationale: Key fact: a desert environment has low rainfall
label: B
label_idx: 1
dataset: openbookqa
chunk1:
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
chunk2:
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
chunk3:
florabase is a public access web - based database of the flora of western australia. it provides authoritative scientific information on 12, 978 taxa, including descriptions, maps, images, conservation status and nomenclatural details. 1, 272 alien taxa ( naturalised weeds ) are also recorded. the system takes data from datasets including the census of western australian plants and the western australian herbarium specimen database of more than 803, 000 vouchered plant collections. it is operated by the western australian herbarium within the department of parks and wildlife. it was established in november 1998. in its distribution guide it uses a combination of ibra version 5. 1 and john stanley beard's botanical provinces. see also declared rare and priority flora list for other online flora databases see list of electronic floras. references external links official website

question: Carbon dioxide exists where it does because
options: ["humans expel it", "deer eat it", "birds use it", "trees absorb it"]
rationale: Key fact: carbon dioxide can be found in the air
label: A
label_idx: 0
dataset: openbookqa
chunk1:
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
chunk2:
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
chunk3:
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python

question: The best way to start a fire is to use
options: ["moisture deprived logs", "old branches", "green branches", "chopped logs"]
rationale: Key fact: dry wood easily burns
label: A
label_idx: 0
dataset: openbookqa
chunk1:
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
chunk2:
insidewood is an online resource and database for wood anatomy, serving as a reference, research, and teaching tool. wood anatomy is a sub - area within the discipline of wood science. this freely accessible database is purely scientific and noncommercial. it was created by nc state university libraries in 2004, using funds from nc state university and the national science foundation, with the donation of wood anatomy materials by several international researchers and members of the iawa, mostly botanists, biologists and wood scientists. contents the database contains categorized anatomical descriptions of wood based on the iawa list of microscopic features for hardwood and softwood identification, complemented by a comprehensive set of photomicrographs. as of november 2023, the database contained thousands of wood anatomical descriptions and nearly 66, 000 photomicrographs of contemporary woods, along with more than 1, 600 descriptions and 2, 000 images of fossil woods. its coverage is worldwide. hosted by north carolina state university libraries, this digital collection encompasses cites - listed timber species and other endangered woody plants. its significance lies in aiding wood identification through a multi - entry key, enabling searches based on the presence or absence of iawa features. additionally, it functions as a virtual reference collection, allowing users to retrieve descriptions and images by searching scientific or common names, or other relevant keywords. the whole database contains materials from over 10, 000 woody species and 200 plant families. initiator for this wood anatomy database has been the american botanist and wood scientist
chunk3:
in computer science, the log - structured merge - tree ( also known as lsm tree, or lsmt ) is a data structure with performance characteristics that make it attractive for providing indexed access to files with high insert volume, such as transactional log data. lsm trees, like other search trees, maintain key - value pairs. lsm trees maintain data in two or more separate structures, each of which is optimized for its respective underlying storage medium ; data is synchronized between the two structures efficiently, in batches. one simple version of the lsm tree is a two - level lsm tree. as described by patrick o'neil, a two - level lsm tree comprises two tree - like structures, called c0 and c1. c0 is smaller and entirely resident in memory, whereas c1 is resident on disk. new records are inserted into the memory - resident c0 component. if the insertion causes the c0 component to exceed a certain size threshold, a contiguous segment of entries is removed from c0 and merged into c1 on disk. the performance characteristics of lsm trees stem from the fact that each component is tuned to the characteristics of its underlying storage medium, and that data is efficiently migrated across media in rolling batches, using an algorithm reminiscent of merge sort. such tuning involves writing data in a sequential manner as opposed to as a series of separate random access requests. this optimization reduces seek time in hard - disk drives ( hdds ) and latency in solid

question: a cloudy day may obstruct visibility of which of these?
options: ["the screen on a smartphone", "our planet's closest star", "the teacher in the class", "the waitress's name tag"]
rationale: Key fact: cloudy means the presence of clouds in the sky
label: B
label_idx: 1
dataset: openbookqa
chunk1:
screen time is the amount of time spent using an electronic device with a display screen such as a smartphone, computer, television, video game console, or tablet. the concept is under significant research with related concepts in digital media use and mental health. screen time is correlated with mental and physical harm in child development. the positive or negative health effects of screen time on a particular individual are influenced by levels and content of exposure. to prevent harmful excesses of screen time, some governments have placed regulations on usage. history statistics the first electronic screen was the cathode ray tube ( crt ), which was invented in 1922. crts were the most popular choice for display screens until the rise of liquid crystal displays ( lcds ) in the early 2000s. screens are now an essential part of entertainment, advertising, and information technologies. since their popularization in 2007, smartphones have become ubiquitous in daily life. in 2023, 85 % of american adults reported owning a smartphone. an american survey in 2016 found a median of 3. 7 minutes per hour screen use per citizen. all forms of screens are frequently used by children and teens. nationally representative data of children and teens in the united states show that the daily average of screen time increases with age. tv and video games were once largest contributors to children's screen time, but the past decade has seen a shift towards smart phones and tablets. specifically, a 2011 nationally representative survey of american parents of children from birth to age 8 suggests that tv
chunk2:
a surface computer is a computer that interacts with the user through the surface of an ordinary object, rather than through a monitor, keyboard, mouse, or other physical hardware. the term " surface computer " was first adopted by microsoft for its pixelsense ( codenamed milan ) interactive platform, which was publicly announced on 30 may 2007. featuring a horizontally - mounted 30 - inch display in a coffee table - like enclosure, users can interact with the machine's graphical user interface by touching or dragging their fingertips and other physical objects such as paintbrushes across the screen, or by setting real - world items tagged with special bar - code labels on top of it. as an example, uploading digital files only requires each object ( e. g. a bluetooth - enabled digital camera ) to be placed on the unit's display. the resulting pictures can then be moved across the screen, or their sizes and orientation can be adjusted as well. pixelsense's internal hardware includes a 2. 0 ghz core 2 duo processor, 2gb of memory, an off the shelf graphics card, a scratch - proof spill - proof surface, a dlp projector, and five infrared cameras to detect touch, unlike the iphone, which uses a capacitive display. these expensive components resulted in a price tag of between $ 12, 500 to $ 15, 000 for the hardware. the first pixelsense units were used as information kiosks in the harrah's family of casinos
chunk3:
the human media lab ( hml ) is a research laboratory in human - computer interaction at queen's university's school of computing in kingston, ontario. its goals are to advance user interface design by creating and empirically evaluating disruptive new user interface technologies, and educate graduate students in this process. the human media lab was founded in 2000 by prof. roel vertegaal and employs an average of 12 graduate students. the laboratory is known for its pioneering work on flexible display interaction and paper computers, with systems such as paperwindows ( 2004 ), paperphone ( 2010 ) and papertab ( 2012 ). hml is also known for its invention of ubiquitous eye input, such as samsung's smart pause and smart scroll technologies. research in 2003, researchers at the human media lab helped shape the paradigm attentive user interfaces, demonstrating how groups of computers could use human social cues for considerate notification. amongst hml's early inventions was the eye contact sensor, first demonstrated to the public on abc good morning america. attentive user interfaces developed at the time included an early iphone prototype that used eye tracking electronic glasses to determine whether users were in a conversation, an attentive television that play / paused contents upon looking away, mobile smart pause and smart scroll ( adopted in samsung's galaxy s4 ) as well as a technique for calibration - free eye tracking by placing invisible infrared markers in the scene. current research at the human media lab focuses

question: Jane's hat flew off her head while standing still on a hilltop. This could be because
options: ["her head blew the hat off", "there was uneven heating of the ground", "a squirrel jumped up and grabbed it off of her head", "a spaceship pulled her hat off her head"]
rationale: Key fact: uneven heating of the Earth 's surface cause wind
label: B
label_idx: 1
dataset: openbookqa
chunk1:
to analyze the sentence " the mouse lost a feather as it took off, " we can break it down into several linguistic levels : lexical, syntactic, semantic, and pragmatic. each of these levels examines different aspects of language and meaning, which can help determine the correctness of the sentence. * * 1. lexical level : * * the lexical level pertains to the words used in the sentence and their meanings. in this case, we must consider the words " mouse, " " lost, " " feather, " and " took off. " the word " mouse " typically refers to a small rodent, while " feather " is a term associated with birds. thus, at a lexical level, there is an apparent mismatch, as mice do not have feathers. this discrepancy suggests a potential issue with the correctness of the sentence at this level. * * 2. syntactic level : * * the syntactic level focuses on the structure and grammatical arrangement of the words in the sentence. the sentence follows a standard english structure with a subject ( " the mouse " ), a verb ( " lost " ), an object ( " a feather " ), and a subordinate clause ( " as it took off " ). from a syntactic perspective, the sentence is well - formed and adheres to english grammatical rules, indicating that it is correct at this level. * * 3. semantic level : *
|
a knowledge ark ( also known as a doomsday ark or doomsday vault ) is a collection of knowledge preserved in such a way that future generations would have access to said knowledge if all other copies of it were lost. scenarios where access to information ( such as the internet ) would become otherwise impossible could be described as existential risks or extinction - level events. a knowledge ark could take the form of a traditional library or a modern computer database. it could also be pictorial in nature, including photographs of important information, or diagrams of critical processes. a knowledge ark would have to be resistant to the effects of natural or man - made disasters in order to be viable. such an ark should include, but would not be limited to, information or material relevant to the survival and prosperity of human civilization. other types of knowledge arks might include genetic material, such as in a dna bank. with the potential for widespread personal dna sequencing becoming a reality, an individual might agree to store their genetic code in a digital or analog storage format which would enable later retrieval of that code. if a species was sequenced before extinction, its genome would still remain available for study. examples an example of a dna bank is the svalbard global seed vault, a seedbank which is intended to preserve a wide variety of plant seeds ( such as important crops ) in case of their extinction. the memory of mankind project involves engraving human knowledge on clay tablets and storing it in a salt mine. the engravings are microscopic
|
a figure, however, there could not have been, unless there were first a veritable body. an empty thing, or phantom, is incapable of a figure.
|
A bird is about to lay an egg, so it needs to construct a safe, round place to place the egg in. The bird constructs using
|
[
"sticks",
"gum",
"rocks",
"tape"
] |
Key fact:
a nest is made of branches
|
A
| 0
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
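As a minimal illustration of the property-graph idea described above (nodes, directed labelled edges, and properties treated as first-class data), the following sketch stores a tiny graph in plain Python dictionaries and answers a one-hop relationship query without a join. The node names, relationship labels, and values are invented for the example and are not any particular graph database's API.

```python
# Minimal in-memory property graph: nodes plus directed, labelled edges with properties.
# Illustrative sketch only, not a real graph database engine.

nodes = {
    "alice": {"kind": "person", "age": 34},
    "bob": {"kind": "person", "age": 29},
    "acme": {"kind": "company"},
}

# Each edge is (source, label, target, properties) -- relationships are data in their own right.
edges = [
    ("alice", "KNOWS", "bob", {"since": 2019}),
    ("alice", "WORKS_AT", "acme", {"role": "engineer"}),
    ("bob", "WORKS_AT", "acme", {"role": "analyst"}),
]

def neighbours(source, label):
    """Return (target, edge properties) pairs reachable from source via edges with the given label."""
    return [(dst, props) for src, lbl, dst, props in edges if src == source and lbl == label]

# One-hop traversal: who does alice know, and since when?
for person, props in neighbours("alice", "KNOWS"):
    print(person, props["since"])
```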
|
a database dump contains a record of the table structure and / or the data from a database and is usually in the form of a list of sql statements ( " sql dump " ). a database dump is most often used for backing up a database so that its contents can be restored in the event of data loss. corrupted databases can often be recovered by analysis of the dump. database dumps are often published by free content projects, to facilitate reuse, forking, offline use, and long - term digital preservation. dumps can be transported into environments with internet blackouts or otherwise restricted internet access, as well as facilitate local searching of the database using sophisticated tools such as grep. see also import and export of data core dump databases database management system sqlyog - mysql gui tool to generate database dump data portability external links mysqldump a database backup program postgresql dump backup methods, for postgresql databases.
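To make the " sql dump " idea concrete, here is a small sketch using Python's standard sqlite3 module: it builds a tiny database, serialises it as a list of SQL statements with Connection.iterdump(), and replays those statements into a fresh database. The table name and sample rows are placeholders chosen for the example.

```python
import sqlite3

# Build a small database and dump it as SQL statements (an "sql dump").
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
src.executemany("INSERT INTO users (name) VALUES (?)", [("ada",), ("grace",)])
src.commit()

dump_sql = "\n".join(src.iterdump())   # CREATE TABLE ... / INSERT INTO ... statements
print(dump_sql)

# Restoring is just replaying the statements into a new (or recovered) database.
restored = sqlite3.connect(":memory:")
restored.executescript(dump_sql)
print(restored.execute("SELECT name FROM users ORDER BY id").fetchall())
```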
|
a crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. they are characterized by symmetry, morphology, and directionally dependent physical properties. a crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. ( molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in x - ray, neutron, and electron diffraction based crystallography ). crystal structures of crystalline material are typically determined from x - ray or neutron single - crystal diffraction data and stored in crystal structure databases. they are routinely identified by comparing reflection intensities and lattice spacings from x - ray powder diffraction data with entries in powder - diffraction fingerprinting databases. crystal structures of nanometer sized crystalline samples can be determined via structure factor amplitude information from single - crystal electron diffraction data or structure factor amplitude and phase angle information from fourier transforms of hrtem images of crystallites. they are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice - fringe fingerprint plots with entries in a lattice - fringe fingerprinting database. crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. many provide structure visualization capabilities. they can be browser based or
|
in which one of these classes are you most likely to find graphite?
|
[
"in a yoga class",
"in a philosophy class",
"in a physical education class",
"in a visual art class"
] |
Key fact:
pencil lead contains mineral graphite
|
D
| 3
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
in order to analyze the sentences provided, it is essential to understand the concepts of classes, instances, and properties as they pertain to ontology and knowledge representation. # # # classes classes represent categories or types of entities that share common characteristics. in the context of the sentences, classes would be general categories into which specific entities ( instances ) fall. for example : - * * gods * * : this is a class that encompasses all deities within a particular belief system or mythology. in the first sentence, " aphrodite and eros are gods, " this class encompasses both aphrodite and eros as members of the divine category. # # # instances instances are specific occurrences or examples of a class. they are particular entities that belong to a class. in the sentences provided : - * * aphrodite * * and * * eros * * : both of these names refer to specific deities in greek mythology. they serve as instances of the class " gods. " in the context of the second sentence, the relationship " aphrodite is a parent of eros " further specifies the connection between these two instances. # # # properties properties describe attributes or characteristics of instances, providing additional information about them. properties can be either qualitative or relational. in the sentences : - * * beautiful * * : this property describes an attribute of aphrodite, indicating her physical or aesthetic appeal. it is a qualitative property, providing insight into the
|
landis and koch ( 1977 ) gave the following table for interpreting κ values for a 2 - annotator 2 - class example. this table is however by no means universally accepted. they supplied no evidence to support it, basing it instead on personal opinion. it has been noted that these guidelines may be more harmful than helpful, as the number of categories and subjects will affect the magnitude of the value. for example, the kappa is higher when there are fewer categories.
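Since the passage discusses interpreting κ for a 2-annotator, 2-class setting, a short worked computation of Cohen's kappa may help; the formula is κ = (p_o − p_e) / (1 − p_e), and the annotation labels below are invented purely for illustration.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators: (p_o - p_e) / (1 - p_e)."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n          # observed agreement
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    categories = set(labels_a) | set(labels_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)   # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Two annotators, two classes ("yes"/"no"); the labels are made up for the example.
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
b = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]
print(round(cohens_kappa(a, b), 3))   # p_o = 0.75, p_e = 0.5, so kappa = 0.5
```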
|
What decreases in an environment as the amount of rain increases?
|
[
"solar light",
"water",
"rivers",
"hydration"
] |
Key fact:
as the amount of rain increases in an environment , available sunlight will decrease in that environment
|
A
| 0
|
openbookqa
|
a water pyramid or waterpyramid is a village - scale solar still, designed to distill water using solar energy for remote communities without easy access to clean, fresh water. it provides a means whereby communities can produce potable drinking water from saline, brackish or polluted water sources. history martijn nitzsche, an engineer from the netherlands, founded aqua - aero water systems to develop water treatment and purification systems. in the early 2000s, the company invented the waterpyramid technology. the first waterpyramid was engineered and installed in collaboration with mwh global, an international environmental engineering firm, in the country of gambia in 2005. the waterpyramid desalination systems were awarded the world bank development marketplace award in 2006. description the pyramid stands about 26 feet ( 7. 9 meters ) tall, 100 feet ( 30 meters ) in diameter, and has a conical shape. it is constructed of plastic sheeting, which is inflated using a fan powered by solar energy generated by the pyramid. within the pyramid, temperatures reach up to 167 f ( 75 c ), which evaporates water pumped into a thin layer inside the cone. distilled water runs down the sides of the pyramid wall and is collected by gutters that feed into a collection tank. when sunshine is replaced by rain, the falling water is also collected around the edge of the base of the cone and stored for use in dry weather. each pyramid can desalinate approximately
|
the national hydrography dataset ( nhd ) is a digital database of surface water features used to make maps. it contains features such as lakes, ponds, streams, rivers, canals, dams, and stream gauges for the united states. description cartographers can link to or download the nhd to use in their computer mapping software. the nhd is used to represent surface water on maps and is also used to perform geospatial analysis. it is a digital vector geospatial dataset designed for use in geographic information systems ( gis ) to analyze the flow of water throughout the nation. the dataset represents over 7. 5 - million miles of streams / rivers and 6. 5 - million lake / ponds. mapping in mapping, the nhd is used with other data themes such as elevation, boundaries, and transportation to produce general reference maps. in geospatial analysis the nhd is used by scientists using gis technology. this takes advantage of a flow direction network that can be processed to trace the flow of water downstream. a rich set of attributes used to identify the water features includes an identifier, the official name of the feature, the length or area of the feature, and metadata describing the source of the data. the identifier is used in an addressing system to link specific information about the water such as water discharge, water quality, and fish population. using the basic water features, flow network, linked information, and other characteristics,
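The flow-direction network mentioned above can be traced with an ordinary graph traversal. The sketch below follows downstream links from a starting reach; the reach identifiers and connections are invented, not real NHD data.

```python
# Hypothetical downstream links between stream reaches (reach id -> next reach downstream).
# Real NHD data would supply these relationships; the ids here are made up.
downstream = {
    "reach_1": "reach_3",
    "reach_2": "reach_3",
    "reach_3": "reach_4",
    "reach_4": None,        # terminal reach, e.g. an outlet
}

def trace_downstream(start):
    """Follow downstream links from a reach until the network ends."""
    path, current = [], start
    while current is not None:
        path.append(current)
        current = downstream.get(current)
    return path

print(trace_downstream("reach_1"))   # ['reach_1', 'reach_3', 'reach_4']
```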
|
this is a list of solar energy topics. a air mass coefficient agrivoltaics artificial photosynthesis b bp solar brightsource energy building - integrated photovoltaics c carbon nanotubes in photovoltaics central solar heating plant community solar farm compact linear fresnel reflector concentrating photovoltaics concentrating solar power crookes radiometer d daylighting horace de saussure desertec drake landing solar community duck curve dye - sensitized solar cell e effect of sun angle on climate energy tower ( downdraft ) euro - solar programme european photovoltaic industry association f feed - in tariff first solar flip flap floating solar ( floatovoltaics ) fresnel reflector charles fritts calvin fuller g geomagnetic storm global dimming greenhouse growth of photovoltaics h halo ( optical phenomenon ) helioseismology heliostat home energy storage i indosolar insolation abram ioffe ise ( fraunhofer institute for solar energy systems ) ivanpah solar power facility j jinko solar l light tube list of photovoltaic power stations list of solar thermal power stations loanpal m magnetic sail auguste mouchout moura photovoltaic power station n nanocrystal solar cell net metering nevada solar one p parabolic reflector parabolic trough passive solar passive solar building design photoelectric effect photovoltaic array photovoltaic system photovoltaic thermal hybrid solar collector
|
What is a riverbank made of?
|
[
"oceans",
"loam",
"rivers",
"dirty clothing"
] |
Key fact:
a riverbank is made of soil
|
B
| 1
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
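To illustrate keeping the same object model in the programming language and in the store, here is a rough sketch that uses Python's standard shelve module as a stand-in for an object store. It is not how any particular OODBMS works; the class and key names are assumptions made for the example, and the point is only that language-level objects are stored and retrieved directly, without mapping them onto tables.

```python
import shelve
from dataclasses import dataclass

@dataclass
class Part:
    name: str
    children: list          # nested objects are stored as-is, no table mapping

# Persist objects directly, using the language's own object model.
with shelve.open("parts_store") as db:
    wheel = Part("wheel", [])
    db["car"] = Part("car", [wheel, Part("engine", [])])

# Later: retrieve the object graph back as ordinary Python objects.
with shelve.open("parts_store") as db:
    car = db["car"]
    print(car.name, [child.name for child in car.children])
```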
|
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
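The privacy risk described above, that combinations of permitted aggregate queries can reveal a single individual's value, can be shown with a two-query difference attack. The salaries below are invented for the example.

```python
# Why aggregate-only access can still leak individual values:
# two permitted aggregate queries that differ by exactly one person. Salaries are invented.
salaries = {"ann": 52000, "ben": 61000, "cho": 58000, "dee": 70000}

def total_salary(people):
    """An 'allowed' aggregate query: only sums are ever returned."""
    return sum(salaries[p] for p in people)

everyone = set(salaries)
q1 = total_salary(everyone)              # aggregate over all employees
q2 = total_salary(everyone - {"dee"})    # aggregate over everyone except one person

print(q1 - q2)   # 70000 -- dee's individual salary, recovered from two aggregates
```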
|
A heavier object
|
[
"requires less force to move",
"requires minimal effort to move",
"requires more muscle power to shift",
"requires a light touch to move"
] |
Key fact:
as the mass of an object increases , the force required to push that object will increase
|
C
| 2
|
openbookqa
|
with the help of muscles, joints allow the body to move with relatively little force.
|
skeletal muscles. skeletal muscles enable the body to move.
|
kinaesthetics ( or kinesthetics, in american english ) is the study of body motion, and of the perception ( both conscious and unconscious ) of one's own body motions. kinesthesis is the learning of movements that an individual commonly performs. the individual must repeat the motions that they are trying to learn and perfect many times for this to happen. while kinesthesis may be described as " muscle memory ", muscles do not store memory ; rather, it is the proprioceptors giving the information from muscles to the brain. to do this, the individual must have a sense of the position of their body and how that changes throughout the motor skill they are trying to perform. while performing the motion the body will use receptors in the muscles to transfer information to the brain to tell the brain about what the body is doing. then after completing the same motor skill numerous times, the brain will begin to remember the motion based on the position of the body at a given time. then, after learning the motion, the body will be able to perform the motor skill even when usual senses are inhibited, such as the person closing their eyes. the body will perform the motion based on the information that is stored in the brain from previous attempts at the same movement. this is possible because the brain has formed connections between the location of body parts in space ( the body uses perception to learn where their body is in space ) and the subsequent movements that commonly follow these positions
|
A puppy was uneducated on how to go through a doggy door until
|
[
"the mom did it",
"it read how to",
"it went to school",
"it made a plan"
] |
Key fact:
animals learn some behaviors from watching their parents
|
A
| 0
|
openbookqa
|
a hierarchical database model is a data model in which the data is organized into a tree - like structure. the data are stored as records, each of which is a collection of one or more fields. each field contains a single value, and the collection of fields in a record defines its type. one type of field is the link, which connects a given record to associated records. using links, records link to other records, which in turn link to further records, forming a tree. an example is a " customer " record that has links to that customer's " orders ", which in turn link to " line _ items ". the hierarchical database model mandates that each child record has only one parent, whereas each parent record can have zero or more child records. the network model extends the hierarchical model by allowing multiple parents and children. in order to retrieve data from these databases, the whole tree needs to be traversed starting from the root node. both models were well suited to data that was normally stored on tape drives, which had to move the tape from end to end in order to retrieve data. when the relational database model emerged, one criticism of hierarchical database models was their close dependence on application - specific implementation. this limitation, along with the relational model's ease of use, contributed to the popularity of relational databases, despite their initially lower performance in comparison with the existing network and hierarchical models. history the hierarchical structure was developed by ibm in the 1960s and used in early mainframe dbms. records' relationships form a tree
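The customer → orders → line_items example above can be sketched as a small tree of records. The field names and values below are invented for illustration, and the traversal starts from the root record as the passage describes.

```python
# A record is a dict of fields; link fields point to child records, forming a tree.
# Field names and values are invented for the example.
customer = {
    "type": "customer",
    "name": "acme corp",
    "orders": [
        {
            "type": "order",
            "order_no": 1001,
            "line_items": [
                {"type": "line_item", "sku": "bolt", "qty": 40},
                {"type": "line_item", "sku": "nut", "qty": 40},
            ],
        }
    ],
}

def traverse(record, depth=0):
    """Depth-first walk from the root; every child record has exactly one parent."""
    print("  " * depth + record["type"])
    for key in ("orders", "line_items"):
        for child in record.get(key, []):
            traverse(child, depth + 1)

traverse(customer)
```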
|
database design is the organization of data according to a database model. the designer determines what data must be stored and how the data elements interrelate. with this information, they can begin to fit the data to the database model. a database management system manages the data accordingly. database design is a process that consists of several steps. conceptual data modeling the first step of database design involves classifying data and identifying interrelationships. the theoretical representation of data is called an ontology or a conceptual data model. determining data to be stored in a majority of cases, the person designing a database is a person with expertise in database design, rather than expertise in the domain from which the data to be stored is drawn e. g. financial information, biological information etc. therefore, the data to be stored in a particular database must be determined in cooperation with a person who does have expertise in that domain, and who is aware of the meaning of the data to be stored within the system. this process is one which is generally considered part of requirements analysis, and requires skill on the part of the database designer to elicit the needed information from those with the domain knowledge. this is because those with the necessary domain knowledge often cannot clearly express the system requirements for the database as they are unaccustomed to thinking in terms of the discrete data elements which must be stored. data to be stored can be determined by requirement specification. determining data relationships once a database designer is aware of the data which is to be
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
|
All of the following contain chloroplasts but this
|
[
"rose bushes",
"sea anemones",
"seaweed",
"algae"
] |
Key fact:
a plant cell contains chloroplasts
|
B
| 1
|
openbookqa
|
the plant dna c - values database ( https : / / cvalues. science. kew. org / ) is a comprehensive catalogue of c - value ( nuclear dna content, or in diploids, genome size ) data for land plants and algae. the database was created by prof. michael d. bennett and dr. ilia j. leitch of the royal botanic gardens, kew, uk. the database was originally launched as the " angiosperm dna c - values database " in april 1997, essentially as an online version of collected data lists that had been published by prof. bennett and colleagues since the 1970s. release 1. 0 of the more inclusive plant dna c - values database was launched in 2001, with subsequent releases 2. 0 in january 2003 and 3. 0 in december 2004. in addition to the angiosperm dataset made available in 1997, the database has been expanded taxonomically several times and now includes data from pteridophytes ( since 2000 ), gymnosperms ( since 2001 ), bryophytes ( since 2001 ), and algae ( since 2004 ) ( see ( 1 ) for update history ). ( note that each of these subset databases is cited individually as they may contain different sets of authors ). the most recent release of the database ( release 7. 1 ) went live in april 2019. it contains data for 12, 273 species of plants comprising 10, 770 angiosperms, 421 gymnos
|
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
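As a rough sketch of the kind of record such a database might hold, the dataclass below bundles the identifiers and attributes listed above. The field names and the sample species are illustrative assumptions, not any real taxonomic schema.

```python
from dataclasses import dataclass, field

@dataclass
class TaxonRecord:
    # Identifier fields described above: scientific name, author, year of publication.
    scientific_name: str
    author: str
    year: int
    synonyms: list = field(default_factory=list)
    distribution: str = ""
    conservation_status: str = ""

# Sample entry; the values are invented for illustration.
record = TaxonRecord(
    scientific_name="Examplus fictus",
    author="Smith",
    year=1901,
    synonyms=["Examplus imaginarius"],
    distribution="hypothetical island",
    conservation_status="least concern",
)
print(record.scientific_name, record.year)
```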
|
florabase is a public access web - based database of the flora of western australia. it provides authoritative scientific information on 12, 978 taxa, including descriptions, maps, images, conservation status and nomenclatural details. 1, 272 alien taxa ( naturalised weeds ) are also recorded. the system takes data from datasets including the census of western australian plants and the western australian herbarium specimen database of more than 803, 000 vouchered plant collections. it is operated by the western australian herbarium within the department of parks and wildlife. it was established in november 1998. in its distribution guide it uses a combination of ibra version 5. 1 and john stanley beard's botanical provinces. see also declared rare and priority flora list for other online flora databases see list of electronic floras. references external links official website
|
When I hear news of a warm front I make sure to bring
|
[
"game boy",
"clocks",
"guns",
"waterproof appendage covers"
] |
Key fact:
a warm front causes cloudy and rainy weather
|
D
| 3
|
openbookqa
|
gun ( also known as graph universe node, gun. js, and gundb ) is an open source, offline - first, real - time, decentralized, graph database written in javascript for the web browser. the database is implemented as a peer - to - peer network distributed across " browser peers " and " runtime peers ". it employs multi - master replication with a custom commutative replicated data type ( crdt ). gun is currently used in the decentralized version of the internet archive. references external links official website gun on github
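GUN's conflict resolution is its own custom CRDT; as a hedged stand-in, the sketch below shows one of the simplest CRDT flavours, a last-write-wins register whose merge is commutative, so peers can apply updates in any order and converge. This is only an illustration of the general idea, not GUN's actual algorithm.

```python
# Last-write-wins register: (timestamp, value). merge() is commutative for distinct
# timestamps, so peers applying the same updates in any order converge to the same state.
# (A real system would also need a deterministic tie-break for equal timestamps.)
def merge(a, b):
    return a if a[0] >= b[0] else b

update_1 = (1001, "hello")          # (logical timestamp, value); values are invented
update_2 = (1005, "hello world")

peer_a = merge(update_1, update_2)
peer_b = merge(update_2, update_1)   # opposite order on another peer
print(peer_a == peer_b, peer_a)      # True (1005, 'hello world') on both peers
```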
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
a data pack ( or fact pack ) is a pre - made database that can be fed to software, such as software agents, games, internet bots or chatterbots, to supply information and facts that the program can later look up. in other words, a data pack can be used to feed minor updates into a system. introduction common data packs may include abbreviations, acronyms, dictionaries, lexicons and technical data, such as country codes, rfcs, filename extensions, tcp and udp port numbers, country calling codes, and so on. data packs may come in formats such as csv and sql that can easily be parsed or imported into a database management system. the database may consist of key - value pairs, like an association list. data packs are commonly used within the video game industry to provide minor updates to games. when a user downloads an update for a game, they are downloading a set of data packs that contain changes such as minor bug fixes or additional content. an example of a data pack used to update a game can be found in the references. example data pack the definition of a data pack is similar to that of a data packet : it contains a large amount of information ( data ) stored within a pack, where the data can be compressed to reduce its file size. only certain programs can read a data pack ; therefore, when the data is packed, it is vital to know whether the receiving program is able to unpack the
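Following the description of CSV-format data packs parsed into key-value pairs, here is a small sketch that reads a tiny country-calling-code pack with Python's csv module into a dictionary for later lookup. The file contents are invented for the example.

```python
import csv
import io

# A tiny CSV "data pack" of country calling codes (contents invented for the example).
pack = io.StringIO("code,country\n44,united kingdom\n81,japan\n233,ghana\n")

# Parse the pack into key-value pairs that a program can look up later.
reader = csv.DictReader(pack)
calling_codes = {row["code"]: row["country"] for row in reader}

print(calling_codes["81"])   # japan
```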
|
Where is a portable way of creating light most useful?
|
[
"pitch-black caverns",
"sunny days",
"a bright rooms",
"the sun"
] |
Key fact:
a flashlight emits light
|
A
| 0
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
|
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
|
Which organism uses xylem for materials transport?
|
[
"saguaro cactus",
"liverwort",
"green algae",
"sphagnum moss"
] |
Key fact:
xylem transports materials through the plant
|
A
| 0
|
openbookqa
|
the plant dna c - values database ( https : / / cvalues. science. kew. org / ) is a comprehensive catalogue of c - value ( nuclear dna content, or in diploids, genome size ) data for land plants and algae. the database was created by prof. michael d. bennett and dr. ilia j. leitch of the royal botanic gardens, kew, uk. the database was originally launched as the " angiosperm dna c - values database " in april 1997, essentially as an online version of collected data lists that had been published by prof. bennett and colleagues since the 1970s. release 1. 0 of the more inclusive plant dna c - values database was launched in 2001, with subsequent releases 2. 0 in january 2003 and 3. 0 in december 2004. in addition to the angiosperm dataset made available in 1997, the database has been expanded taxonomically several times and now includes data from pteridophytes ( since 2000 ), gymnosperms ( since 2001 ), bryophytes ( since 2001 ), and algae ( since 2004 ) ( see ( 1 ) for update history ). ( note that each of these subset databases is cited individually as they may contain different sets of authors ). the most recent release of the database ( release 7. 1 ) went live in april 2019. it contains data for 12, 273 species of plants comprising 10, 770 angiosperms, 421 gymnos
|
florabase is a public access web - based database of the flora of western australia. it provides authoritative scientific information on 12, 978 taxa, including descriptions, maps, images, conservation status and nomenclatural details. 1, 272 alien taxa ( naturalised weeds ) are also recorded. the system takes data from datasets including the census of western australian plants and the western australian herbarium specimen database of more than 803, 000 vouchered plant collections. it is operated by the western australian herbarium within the department of parks and wildlife. it was established in november 1998. in its distribution guide it uses a combination of ibra version 5. 1 and john stanley beard's botanical provinces. see also declared rare and priority flora list for other online flora databases see list of electronic floras. references external links official website
|
dr. duke's phytochemical and ethnobotanical databases is an online database developed by james a. duke at the usda. the databases report species, phytochemicals, and biological activity, as well as ethnobotanical uses. the current phytochemical and ethnobotanical databases facilitate plant, chemical, bioactivity, and ethnobotany searches. a large number of plants and their chemical profiles are covered, and data are structured to support browsing and searching in several user - focused ways. for example, users can get a list of chemicals and activities for a specific plant of interest, using either its scientific or common name download a list of chemicals and their known activities in pdf or spreadsheet form find plants with chemicals known for a specific biological activity display a list of chemicals with their ld toxicity data find plants with potential cancer - preventing activity display a list of plants for a given ethnobotanical use find out which plants have the highest levels of a specific chemical references to the supporting scientific publications are provided for each specific result. also included are links to nutritional databases, plants and cancer treatments and other plant - related databases. the content of the database is licensed under the creative commons cc0 public domain. external links dr. duke's phytochemical and ethnobotanical databases references ( dataset ) u. s. department of agriculture, agricultural research service. 1992 - 2016
|
Which is more likely the result of a big earthquake
|
[
"a mountain",
"a big house",
"a modern airplane.",
"a fancy car"
] |
Key fact:
earthquakes cause rock layers to fold on top of each other
|
A
| 0
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
|
a database model is a type of data model that determines the logical structure of a database and fundamentally determines in which manner data can be stored, organized, and manipulated. the most popular example of a database model is the relational model ( or the sql approximation of relational ), which uses a table - based format. common logical data models for databases include : navigational databases, hierarchical database model, network model, graph database, relational model, entity – relationship model, enhanced entity – relationship model, object model, document model, entity – attribute – value model, and star schema. an object – relational database combines the two related structures. physical data models include : inverted index and flat file. other models include : multidimensional model, array model, and multivalue model. specialized models are optimized for particular types of data : xml database, semantic model, content store, event store, time series model
|
A thermal insulator slows the transfer of what?
|
[
"warmness",
"light",
"energy",
"liquid"
] |
Key fact:
a thermal insulator slows the transfer of heat
|
A
| 0
|
openbookqa
|
the chemical database service is an epsrc - funded mid - range facility that provides uk academic institutions with access to a number of chemical databases. it has been hosted by the royal society of chemistry since 2013, before which it was hosted by daresbury laboratory ( part of the science and technology facilities council ). currently, the included databases are : acd / i - lab, a tool for prediction of physicochemical properties and nmr spectra from a chemical structure available chemicals directory, a structure - searchable database of commercially available chemicals cambridge structural database ( csd ), a crystallographic database of organic and organometallic structures inorganic crystal structure database ( icsd ), a crystallographic database of inorganic structures crystalworks, a database combining data from csd, icsd and crystmet detherm, a database of thermophysical data for chemical compounds and mixtures spresiweb, a database of organic compounds and reactions = = references = =
|
a crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. they are characterized by symmetry, morphology, and directionally dependent physical properties. a crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. ( molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in x - ray, neutron, and electron diffraction based crystallography ). crystal structures of crystalline material are typically determined from x - ray or neutron single - crystal diffraction data and stored in crystal structure databases. they are routinely identified by comparing reflection intensities and lattice spacings from x - ray powder diffraction data with entries in powder - diffraction fingerprinting databases. crystal structures of nanometer sized crystalline samples can be determined via structure factor amplitude information from single - crystal electron diffraction data or structure factor amplitude and phase angle information from fourier transforms of hrtem images of crystallites. they are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice - fringe fingerprint plots with entries in a lattice - fringe fingerprinting database. crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. many provide structure visualization capabilities. they can be browser based or
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
The bear in the wild needs to find other animals to feast.
|
[
"they are killers",
"they only eat",
"they never kill",
"they are docile"
] |
Key fact:
lizards eat insects
|
A
| 0
|
openbookqa
|
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
|
genedb was a genome database for eukaryotic and prokaryotic pathogens. references external links http : / / www. genedb. org
|
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb includes also a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without missing the focus on the searched protein or gene. in phylomedb v4 all the information available for each tree
|
If the part of a tree that contains chloroplasts has flatter surfaces they have more
|
[
"vibrant colors",
"absorbing mass",
"life",
"friends"
] |
Key fact:
as flatness of a leaf increases , the amount of sunlight that leaf can absorb will increase
|
B
| 1
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
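The passage above notes that statistical databases often allow only aggregate queries, yet combinations of aggregates can still leak individual records. A common textbook mitigation is query-set-size control: refuse any aggregate whose matching set is too small. The sketch below is a toy illustration under assumed data and an assumed threshold, not a complete defence (it does not stop tracker attacks built from overlapping queries).

```python
# Toy query-set-size control for a statistical database: aggregate answers
# are refused when too few records match, since tiny result sets can
# effectively reveal a single individual. Data and threshold are made up.
MIN_QUERY_SET = 5

records = [
    {"dept": "a", "salary": 51000},
    {"dept": "a", "salary": 56000},
    {"dept": "b", "salary": 48000},
    # ... more rows ...
]

def mean_salary(rows, predicate):
    matched = [r["salary"] for r in rows if predicate(r)]
    if len(matched) < MIN_QUERY_SET:
        raise ValueError("query set too small; refusing to answer")
    return sum(matched) / len(matched)

# A query matching only the two "dept a" rows above would be rejected:
# mean_salary(records, lambda r: r["dept"] == "a")  -> ValueError
```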
|
flockdb was an open - source distributed, fault - tolerant graph database for managing wide but shallow network graphs. it was initially used by twitter to store relationships between users, e. g. followings and favorites. flockdb differs from other graph databases, e. g. neo4j in that it was not designed for multi - hop graph traversal but rather for rapid set operations, not unlike the primary use - case for redis sets. flockdb was posted on github shortly after twitter released its gizzard framework, which it used to query the flockdb distributed datastore. the database is licensed under the apache license. twitter no longer supports flockdb. see also gizzard ( scala framework ) references external links official website
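The passage above characterises this workload as "wide but shallow" graphs served by rapid set operations rather than multi-hop traversal. A minimal sketch of that usage pattern, with made-up data and no claim about flockdb's actual interface, might look like this:

```python
# Sketch of the "rapid set operations" use case: each account's followings
# kept as a set, so mutual-follow checks and intersections are single set
# operations rather than graph traversals. Data is illustrative only.
followings = {
    "alice": {"bob", "carol"},
    "bob": {"alice"},
    "carol": {"alice", "bob"},
}

def mutual_follows(a, b):
    return a in followings.get(b, set()) and b in followings.get(a, set())

common = followings["alice"] & followings["carol"]   # accounts both follow
print(mutual_follows("alice", "bob"), common)
```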
|
If your dog is getting noticeably skinnier, you need to
|
[
"increase its food intake",
"play some video games",
"feed it less food",
"Make it fly away"
] |
Key fact:
as the amount of food an animal eats decreases , that organism will become thinner
|
A
| 0
|
openbookqa
|
eating ( also known as consuming ) is the ingestion of food. in biology, this is typically done to provide a heterotrophic organism with energy and nutrients and to allow for growth. animals and other heterotrophs must eat in order to survive carnivores eat other animals, herbivores eat plants, omnivores consume a mixture of both plant and animal matter, and detritivores eat detritus. fungi digest organic matter outside their bodies as opposed to animals that digest their food inside their bodies. for humans, eating is more complex, but is typically an activity of daily living. physicians and dieticians consider a healthful diet essential for maintaining peak physical condition. some individuals may limit their amount of nutritional intake. this may be a result of a lifestyle choice : as part of a diet or as religious fasting. limited consumption may be due to hunger or famine. overconsumption of calories may lead to obesity and the reasons behind it are myriad, however, its prevalence has led some to declare an " obesity epidemic ". eating practices among humans many homes have a large kitchen area devoted to preparation of meals and food, and may have a dining room, dining hall, or another designated area for eating. most societies also have restaurants, food courts, and food vendors so that people may eat when away from home, when lacking time to prepare food, or as a social occasion. at their highest level of sophistication,
|
food provides building materials for the body. the body needs building materials for growth and repair.
|
carbohydrates, proteins, and lipids contain energy. when your body digests food, it breaks down the molecules of these nutrients. this releases the energy so your body can use it.
|
Which two forces are likely the cause of canyons?
|
[
"water plus fire",
"fire and brimstone",
"water plus gravity",
"H20 and lemmings"
] |
Key fact:
most canyons are formed by flowing rivers through erosion over long periods of time
|
C
| 2
|
openbookqa
|
metpetdb is a relational database and repository for global geochemical data on and images collected from metamorphic rocks from the earth's crust. metpetdb is designed and built by a global community of metamorphic petrologists in collaboration with computer scientists at rensselaer polytechnic institute as part of the national cyberinfrastructure initiative and supported by the national science foundation. metpetdb is unique in that it incorporates image data collected by a variety of techniques, e. g. photomicrographs, backscattered electron images ( sem ), and x - ray maps collected by wavelength dispersive spectroscopy or energy dispersive spectroscopy. purpose metpetdb was built for the purpose of archiving published data and for storing new data for ready access to researchers and students in the petrologic community. this database facilitates the gathering of information for researchers beginning new projects and permits browsing and searching for data relating to anywhere on the globe. metpetdb provides a platform for collaborative studies among researchers anywhere on the planet, serves as a portal for students beginning their studies of metamorphic geology, and acts as a repository of vast quantities of data being collected by researchers globally. design the basic structure of metpetdb is based on a geologic sample and derivative subsamples. geochemical data are linked to subsamples and the minerals within them, while image data can relate to samples or subsamples. metpetdb is designed to store the distinct spatial / textural context
|
blazegraph is an open source triplestore and graph database, written in java. it has been abandoned since 2020 and is known to be used in production by wmde for the wikidata sparql endpoint. it is licensed under the gnu gpl ( version 2 ). amazon acquired the blazegraph developers and the blazegraph open source development was essentially stopped in april 2018. early history the system was first known as bigdata. since release of version 1. 5 ( 12 february 2015 ), it is named blazegraph. prominent users the wikimedia foundation uses blazegraph for the wikidata query service, which is a sparql endpoint. sophox, a fork of the wikidata query service, specializes in openstreetmap queries. the datatourisme project uses blazegraph as the database platform ; however, graphql is used as the query language instead of sparql. notable features rdf * an alternative approach to rdf reification, which gives rdf graphs capabilities of lpg graphs ; as the consequence of the previous, ability of querying graphs both in sparql and gremlin ; as an alternative to gremlin querying, gas abstraction over rdf graphs support in sparql ; the service syntax of federated queries for functionality extending ; managed behavior of the query plan generator ; reusable named subqueries. acqui -
|
the chemical database service is an epsrc - funded mid - range facility that provides uk academic institutions with access to a number of chemical databases. it has been hosted by the royal society of chemistry since 2013, before which it was hosted by daresbury laboratory ( part of the science and technology facilities council ). currently, the included databases are : acd / i - lab, a tool for prediction of physicochemical properties and nmr spectra from a chemical structure available chemicals directory, a structure - searchable database of commercially available chemicals cambridge structural database ( csd ), a crystallographic database of organic and organometallic structures inorganic crystal structure database ( icsd ), a crystallographic database of inorganic structures crystalworks, a database combining data from csd, icsd and crystmet detherm, a database of thermophysical data for chemical compounds and mixtures spresiweb, a database of organic compounds and reactions = = references = =
|
A tuna would prefer to consume
|
[
"An Apple",
"beef",
"Nemo",
"dogs"
] |
Key fact:
tuna eat fish
|
C
| 2
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
the animal genome size database is a catalogue of published genome size estimates for vertebrate and invertebrate animals. it was created in 2001 by dr. t. ryan gregory of the university of guelph in canada. as of september 2005, the database contains data for over 4, 000 species of animals. a similar database, the plant dna c - values database ( c - value being analogous to genome size in diploid organisms ) was created by researchers at the royal botanic gardens, kew, in 1997. see also list of organisms by chromosome count references external links animal genome size database plant dna c - values database fungal genome size database cell size database
|
the vertebrate genome annotation ( vega ) database is a biological database dedicated to assisting researchers in locating specific areas of the genome and annotating genes or regions of vertebrate genomes. the vega browser is based on ensembl web code and infrastructure and provides a public curation of known vertebrate genes for the scientific community. the vega website is updated frequently to maintain the most current information about vertebrate genomes and attempts to present consistently high - quality annotation of all its published vertebrate genomes or genome regions. vega was developed by the wellcome trust sanger institute and is in close association with other annotation databases, such as zfin ( the zebrafish information network ), the havana group and genbank. manual annotation is currently more accurate at identifying splice variants, pseudogenes, polyadenylation features, non - coding regions and complex gene arrangements than automated methods. history the vertebrate genome annotation ( vega ) database was first made public in 2004 by the wellcome trust sanger institute. it was designed to view manual annotations of human, mouse and zebrafish genomic sequences, and it is the central cache for genome sequencing centers to deposit their annotation of human chromosomes. manual annotation of genomic data is extremely valuable to produce an accurate reference gene set but is expensive compared with automatic methods and so has been limited to model organisms. annotation tools
|
Who can hear sounds?
|
[
"boulders",
"giraffes",
"rocks",
"stone statues"
] |
Key fact:
when sound reaches the ear , that sound can be heard
|
B
| 1
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
metpetdb is a relational database and repository for global geochemical data on and images collected from metamorphic rocks from the earth's crust. metpetdb is designed and built by a global community of metamorphic petrologists in collaboration with computer scientists at rensselaer polytechnic institute as part of the national cyberinfrastructure initiative and supported by the national science foundation. metpetdb is unique in that it incorporates image data collected by a variety of techniques, e. g. photomicrographs, backscattered electron images ( sem ), and x - ray maps collected by wavelength dispersive spectroscopy or energy dispersive spectroscopy. purpose metpetdb was built for the purpose of archiving published data and for storing new data for ready access to researchers and students in the petrologic community. this database facilitates the gathering of information for researchers beginning new projects and permits browsing and searching for data relating to anywhere on the globe. metpetdb provides a platform for collaborative studies among researchers anywhere on the planet, serves as a portal for students beginning their studies of metamorphic geology, and acts as a repository of vast quantities of data being collected by researchers globally. design the basic structure of metpetdb is based on a geologic sample and derivative subsamples. geochemical data are linked to subsamples and the minerals within them, while image data can relate to samples or subsamples. metpetdb is designed to store the distinct spatial / textural context
|
blazegraph is an open source triplestore and graph database, written in java. it has been abandoned since 2020 and is known to be used in production by wmde for the wikidata sparql endpoint. it is licensed under the gnu gpl ( version 2 ). amazon acquired the blazegraph developers and the blazegraph open source development was essentially stopped in april 2018. early history the system was first known as bigdata. since release of version 1. 5 ( 12 february 2015 ), it is named blazegraph. prominent users the wikimedia foundation uses blazegraph for the wikidata query service, which is a sparql endpoint. sophox, a fork of the wikidata query service, specializes in openstreetmap queries. the datatourisme project uses blazegraph as the database platform ; however, graphql is used as the query language instead of sparql. notable features rdf * an alternative approach to rdf reification, which gives rdf graphs capabilities of lpg graphs ; as the consequence of the previous, ability of querying graphs both in sparql and gremlin ; as an alternative to gremlin querying, gas abstraction over rdf graphs support in sparql ; the service syntax of federated queries for functionality extending ; managed behavior of the query plan generator ; reusable named subqueries. acqui -
|
What best describes the relationship with the moon, Earth, and the sun?
|
[
"the Earth is absorbing sunlight",
"the moon is equidistant from the sun and Earth",
"the moon is a star",
"the sun travels around the Earth"
] |
Key fact:
the moon reflects sunlight towards the Earth
|
A
| 0
|
openbookqa
|
our sun is a star, a sphere of plasma held together by gravity. it is an ordinary star that is extraordinarily important. the sun provides light and heat to our planet. this star supports almost all life on earth.
|
the earth, moon and sun are linked together in space. monthly or daily cycles continually remind us of these links. every month, you can see the moon change. this is due to where it is relative to the sun and earth. in one phase, the moon is brightly illuminated - a full moon. in the opposite phase it is completely dark - a new moon. in between, it is partially lit up. when the moon is in just the right position, it causes an eclipse. the daily tides are another reminder of the moon and sun. they are caused by the pull of the moon and the sun on the earth. tides were discussed in the oceans chapter.
|
sunlight is the portion of the electromagnetic radiation which is emitted by the sun ( i. e. solar radiation ) and received by the earth, in particular the visible light perceptible to the human eye as well as invisible infrared ( typically perceived by humans as warmth ) and ultraviolet ( which can have physiological effects such as sunburn ) lights. however, according to the american meteorological society, there are " conflicting conventions as to whether all three [... ] are referred to as light, or whether that term should only be applied to the visible portion of the spectrum. " upon reaching the earth, sunlight is scattered and filtered through the earth's atmosphere as daylight when the sun is above the horizon. when direct solar radiation is not blocked by clouds, it is experienced as sunshine, a combination of bright light and radiant heat ( atmospheric ). when blocked by clouds or reflected off other objects, sunlight is diffused. sources estimate a global average of between 164 watts to 340 watts per square meter over a 24 - hour day ; this figure is estimated by nasa to be about a quarter of earth's average total solar irradiance. the ultraviolet radiation in sunlight has both positive and negative health effects, as it is both a requisite for vitamin d3 synthesis and a mutagen. sunlight takes about 8. 3 minutes to reach earth from the surface of the sun. a photon starting at the center of the sun and changing direction every time it encounters a charged particle would take between 10
|
Magma pours out a volcano and what off a cliff
|
[
"drips",
"suspends",
"freezes",
"sticks"
] |
Key fact:
matter in the liquid state drips
|
A
| 0
|
openbookqa
|
a database dump contains a record of the table structure and / or the data from a database and is usually in the form of a list of sql statements ( " sql dump " ). a database dump is most often used for backing up a database so that its contents can be restored in the event of data loss. corrupted databases can often be recovered by analysis of the dump. database dumps are often published by free content projects, to facilitate reuse, forking, offline use, and long - term digital preservation. dumps can be transported into environments with internet blackouts or otherwise restricted internet access, as well as facilitate local searching of the database using sophisticated tools such as grep. see also import and export of data core dump databases database management system sqlyog - mysql gui tool to generate database dump data portability external links mysqldump a database backup program postgresql dump backup methods, for postgresql databases.
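The passage above describes a dump as a list of SQL statements that can recreate the database. As one concrete, hedged example, Python's standard-library sqlite3 module exposes iterdump(), which yields exactly such statements; the file names and table here are invented for illustration.

```python
# Sketch of producing an SQL dump for backup with sqlite3's iterdump(),
# which emits the CREATE/INSERT statements needed to rebuild the database.
import sqlite3

con = sqlite3.connect("example.db")
con.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("INSERT INTO t (name) VALUES ('sample row')")
con.commit()

with open("dump.sql", "w") as f:
    for statement in con.iterdump():   # yields SQL text, one statement at a time
        f.write(statement + "\n")
```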
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
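The passage above contrasts storing data as objects with flattening it into tables. The sketch below only illustrates that spirit with Python's standard-library shelve module, which persists whole object graphs by key; it is a stand-in for the idea, not an OODBMS, and all names are invented for the example.

```python
# Sketch of object persistence: instances are saved and restored as objects
# (including nested references) rather than being decomposed into rows.
import shelve
from dataclasses import dataclass, field

@dataclass
class Part:
    name: str
    children: list = field(default_factory=list)

wheel = Part("wheel")
car = Part("car", [wheel])

with shelve.open("parts.db") as db:
    db["car"] = car                      # the object graph is stored as-is

with shelve.open("parts.db") as db:
    restored = db["car"]
    print(restored.children[0].name)     # wheel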
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
A way to keep a cup of coffee warm is to
|
[
"cook it in the oven",
"heat it with a torch",
"put it in the sun",
"use a heated plate"
] |
Key fact:
a hot plate is a source of heat
|
D
| 3
|
openbookqa
|
heat from a wood fire can boil a pot of water. if you put an egg in the pot, you can eat a hard boiled egg in 15 minutes ( cool it down first! ). the energy to cook the egg was stored in the wood. the wood got that energy from the sun when it was part of a tree. the sun generated the energy by nuclear fusion. you started the fire with a match. the head of the match stores energy as chemical energy. that energy lights the wood on fire. the fire burns as long as there is energy in the wood. once the wood has burned up, there is no energy left in it. the fire goes out.
|
over 90 % of the energy we use comes originally from the sun. every day, the sun provides the earth with almost 10, 000 times the amount of energy necessary to meet all of the world ’ s energy needs for that day. our challenge is to find ways to convert and store incoming solar energy so that it can be used in reactions or chemical processes that are both convenient and nonpolluting. plants and many bacteria capture solar energy through photosynthesis. we release the energy stored in plants when we burn wood or plant products such as ethanol. we also use this energy to fuel our bodies by eating food that comes directly from plants or from animals that got their energy by eating plants. burning coal and petroleum also releases stored solar energy : these fuels are fossilized plant and animal matter. this chapter will introduce the basic ideas of an important area of science concerned with the amount of heat absorbed or released during chemical and physical changes — an area called thermochemistry. the concepts introduced in this chapter are widely used in almost all scientific and technical fields. food scientists use them to determine the energy content of foods. biologists study the energetics of living organisms, such as the metabolic combustion of sugar into carbon dioxide and water. the oil, gas, and transportation industries, renewable energy providers, and many others endeavor to find better methods to produce energy for our commercial and personal needs. engineers strive to improve energy efficiency, find better ways to heat and cool our homes, refrigerate
|
laboratory ovens are a common piece of equipment that can be found in electronics, materials processing, forensic, and research laboratories. these ovens generally provide pinpoint temperature control and uniform temperatures throughout the heating process. the following applications are some of the common uses for laboratory ovens : annealing, die - bond curing, drying or dehydrating, polyimide baking, sterilizing, evaporating. typical sizes are from one cubic foot to 0. 9 cubic metres ( 32 cu ft ). some ovens can reach temperatures that are higher than 300 degrees celsius. these temperatures are then applied from all sides of the oven to provide constant heat to sample. laboratory ovens can be used in numerous different applications and configurations, including clean rooms, forced convection, horizontal airflow, inert atmosphere, natural convection, and pass through. there are many types of laboratory ovens that are used throughout laboratories. standard digital ovens are mainly used for drying and heating processes while providing temperature control and safety. heavy duty ovens are used more in the industrial laboratories and provide testing and drying for biological samples. high temperature ovens are custom built and have additional insulation lining. this is needed for the oven due to its high temperatures that can reach up to 500 degrees celsius. other forms of the laboratory oven include vacuum ovens, forced air convection ovens, and gravity convection ovens. forensic labs use vacuum ovens that have been configured in specific ways to assist in
|
A pulley is used to do what with objects?
|
[
"crush",
"cool",
"increase altitude",
"elevate significance"
] |
Key fact:
a pulley is used for lifting objects
|
C
| 2
|
openbookqa
|
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
|
then the bulk of effort concentrates on writing the proper mediator code that will transform predicates on weather into a query over the weather website. this effort can become complex if some other source also relates to weather, because the designer may need to write code to properly combine the results from the two sources. on the other hand, in lav, the source database is modeled as a set of views over the global schema g.
|
then the bulk of effort concentrates on writing the proper mediator code that will transform predicates on weather into a query over the weather website. this effort can become complex if some other source also relates to weather, because the designer may need to write code to properly combine the results from the two sources. on the other hand, in lav, the source database is modeled as a set of views over the global schema g.
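The passage above describes mediator code that turns a predicate over a global relation into a query against a source and reshapes the answer. A toy sketch of that translation step is shown below; the global schema weather(city, temp_c), the source function, and its field names are all hypothetical.

```python
# Toy mediator sketch: a query over a global weather(city, temp_c) relation
# is answered by calling a (hypothetical) source and converting its fields
# into the global-schema shape.
def source_weather_site(city):
    # stand-in for the call to the weather website
    return {"location": city, "temperature_f": 68.0}

def mediator_weather(city):
    row = source_weather_site(city)
    # combine / convert source fields into the global-schema shape
    return {"city": row["location"], "temp_c": (row["temperature_f"] - 32) * 5 / 9}

print(mediator_weather("oslo"))
```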
|
A decomposer might thrive more on
|
[
"Magic",
"Jupiter",
"Time Traveling",
"Old turkey"
] |
Key fact:
dead organisms are the source of nutrients for decomposers
|
D
| 3
|
openbookqa
|
a temporal database stores data relating to time instances. it offers temporal data types and stores information relating to past, present and future time. temporal databases can be uni - temporal, bi - temporal or tri - temporal. more specifically the temporal aspects usually include valid time, transaction time and / or decision time. valid time is the time period during or event time at which a fact is true in the real world. transaction time is the time at which a fact was recorded in the database. decision time is the time at which the decision was made about the fact. used to keep a history of decisions about valid times. types uni - temporal a uni - temporal database has one axis of time, either the validity range or the system time range. bi - temporal a bi - temporal database has two axes of time : valid time transaction time or decision time tri - temporal a tri - temporal database has three axes of time : valid time transaction time decision time this approach introduces additional complexities. temporal databases are in contrast to current databases ( not to be confused with currently available databases ), which store only facts which are believed to be true at the current time. features temporal databases support managing and accessing temporal data by providing one or more of the following features : a time period datatype, including the ability to represent time periods with no end ( infinity or forever ) the ability to define valid and transaction time period attributes and bitemporal relations system - maintained transaction time temporal primary keys, including
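The passage above distinguishes valid time (when a fact holds in the real world) from transaction time (when the database recorded it). A minimal bi-temporal record could be sketched as below; the field names and the open-ended end date are assumptions made for illustration.

```python
# Sketch of a bi-temporal row: the valid-time period is stored separately
# from the transaction (recording) time, so history can be reconstructed
# along either axis.
from dataclasses import dataclass
from datetime import date, datetime

@dataclass
class BitemporalRow:
    key: str
    value: str
    valid_from: date
    valid_to: date          # a far-future date stands in for "no end"
    recorded_at: datetime

row = BitemporalRow(
    key="address/42",
    value="12 main st",
    valid_from=date(2020, 1, 1),
    valid_to=date(9999, 12, 31),
    recorded_at=datetime(2020, 1, 5, 9, 30),
)
print(row)
```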
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
|
jet propulsion laboratory development ephemeris ( abbreviated jpl de ( number ), or simply de ( number ) ) designates one of a series of mathematical models of the solar system produced at the jet propulsion laboratory in pasadena, california, for use in spacecraft navigation and astronomy. the models consist of numeric representations of positions, velocities and accelerations of major solar system bodies, tabulated at equally spaced intervals of time, covering a specified span of years. barycentric rectangular coordinates of the sun, eight major planets and pluto, and geocentric coordinates of the moon are tabulated. history there have been many versions of the jpl de, from the 1960s through the present, in support of both robotic and crewed spacecraft missions. available documentation is limited, but we know de69 was announced in 1969 to be the third release of the jpl ephemeris tapes, and was a special purpose, short - duration ephemeris. the then - current jpl export ephemeris was de19. these early releases were distributed on magnetic tape. in the days before personal computers, computers were large and expensive, and numerical integrations such as these were run by large organizations with ample resources. the jpl ephemerides prior to de405 were integrated on a univac mainframe in double precision. for instance, de102, which was created in 1977, took six million steps and ran for nine days on a univa
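The passage above describes an ephemeris as positions tabulated at equally spaced times. The sketch below only illustrates that tabulate-then-interpolate idea with made-up samples and simple linear interpolation; the real DE files use a more elaborate polynomial representation, so nothing here reflects their actual format.

```python
# Generic sketch of reading a tabulated ephemeris: positions stored at
# equally spaced epochs, with linear interpolation between samples.
table_t0 = 0.0          # epoch of the first sample (days)
table_dt = 0.5          # spacing between samples (days)
positions = [           # made-up (x, y, z) samples for one body, in km
    (1.0e8, 2.0e8, 0.5e8),
    (1.1e8, 1.9e8, 0.6e8),
    (1.2e8, 1.8e8, 0.7e8),
]

def position_at(t):
    s = (t - table_t0) / table_dt
    i = int(s)                       # index of the sample just before t
    f = s - i                        # fractional distance to the next sample
    p0, p1 = positions[i], positions[i + 1]
    return tuple(a + f * (b - a) for a, b in zip(p0, p1))

print(position_at(0.25))
```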
|
Which organism likely contains chlorophyll?
|
[
"bamboo",
"pandas",
"protozoa",
"humans"
] |
Key fact:
chlorophyll is used for absorbing light energy by plants
|
A
| 0
|
openbookqa
|
genedb was a genome database for eukaryotic and prokaryotic pathogens. references external links http://www.genedb.org
|
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
|
the eukaryotic pathogen vector and host database, or veupathdb, is a database of genomics and experimental data related to various eukaryotic pathogens. it was established in 2006 under a national institutes of health program to create bioinformatics resource centers to facilitate research on pathogens that may pose biodefense threats. veupathdb stores data related to its organisms of interest and provides tools for searching through and analyzing the data. it currently consists of 14 component databases, each dedicated to a certain research topic. veupathdb includes : genomics resources covering eukaryotic protozoan parasites host responses to parasite infection ( hostdb ) orthologs ( orthomcl ) clinical study data ( clinepidb ) microbiome data ( microbiomedb ) history veupathdb was established under the nih bioinformatics resource centers program as apidb, a resource meant to cover apicomplexan parasites. apidb originally consisted of component sites cryptodb ( for cryptosporidium ), plasmodb ( for plasmodium ), and toxodb ( for toxoplasma gondii ). as apidb grew to focus on eukaryotic pathogens beyond apicomplexans, the name was changed to eupathdb to support its broadened scope. eupathdb was the result of collaboration between many different parasitologists, including david roos,
|
A light bulb turns on when it receives energy from
|
[
"a cable",
"an oven",
"gasoline",
"a person"
] |
Key fact:
when electricity flows to a light bulb , the light bulb will come on
|
A
| 0
|
openbookqa
|
the chemical database service is an epsrc - funded mid - range facility that provides uk academic institutions with access to a number of chemical databases. it has been hosted by the royal society of chemistry since 2013, before which it was hosted by daresbury laboratory ( part of the science and technology facilities council ). currently, the included databases are : acd / i - lab, a tool for prediction of physicochemical properties and nmr spectra from a chemical structure available chemicals directory, a structure - searchable database of commercially available chemicals cambridge structural database ( csd ), a crystallographic database of organic and organometallic structures inorganic crystal structure database ( icsd ), a crystallographic database of inorganic structures crystalworks, a database combining data from csd, icsd and crystmet detherm, a database of thermophysical data for chemical compounds and mixtures spresiweb, a database of organic compounds and reactions = = references = =
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
metpetdb is a relational database and repository for global geochemical data on and images collected from metamorphic rocks from the earth's crust. metpetdb is designed and built by a global community of metamorphic petrologists in collaboration with computer scientists at rensselaer polytechnic institute as part of the national cyberinfrastructure initiative and supported by the national science foundation. metpetdb is unique in that it incorporates image data collected by a variety of techniques, e. g. photomicrographs, backscattered electron images ( sem ), and x - ray maps collected by wavelength dispersive spectroscopy or energy dispersive spectroscopy. purpose metpetdb was built for the purpose of archiving published data and for storing new data for ready access to researchers and students in the petrologic community. this database facilitates the gathering of information for researchers beginning new projects and permits browsing and searching for data relating to anywhere on the globe. metpetdb provides a platform for collaborative studies among researchers anywhere on the planet, serves as a portal for students beginning their studies of metamorphic geology, and acts as a repository of vast quantities of data being collected by researchers globally. design the basic structure of metpetdb is based on a geologic sample and derivative subsamples. geochemical data are linked to subsamples and the minerals within them, while image data can relate to samples or subsamples. metpetdb is designed to store the distinct spatial / textural context
|
Magma is sourced in volcanoes and
|
[
"is high enough kelvin to melt steel",
"on the desert plains",
"is beneath the aliens",
"can freeze water at all times"
] |
Key fact:
volcanoes are often found under oceans
|
A
| 0
|
openbookqa
|
it has been postulated that surface ice may be responsible for these high luminosity levels, as the silicate rocks that compose most of the surface of mercury have exactly the opposite effect on luminosity. in spite of its proximity to the sun, mercury may have surface ice, since temperatures near the poles are constantly below freezing point : on the polar plains, the temperature does not rise above −106 °c. and craters at mercury's higher latitudes ( discovered by radar surveys from earth as well ) may be deep enough to shield the ice from direct sunlight.
|
when there was a magnetic field, the atmosphere would have been protected from erosion by the solar wind, which would ensure the maintenance of a dense atmosphere, necessary for liquid water to exist on the surface of mars. the loss of the atmosphere was accompanied by decreasing temperatures. part of the liquid water inventory sublimed and was transported to the poles, while the rest became trapped in permafrost, a subsurface ice layer. observations on earth and numerical modeling have shown that a crater - forming impact can result in the creation of a long - lasting hydrothermal system when ice is present in the crust.
|
ice is water that is frozen into a solid state, typically forming at or below temperatures of 0 °c, 32 °f, or 273.15 k. it occurs naturally on earth, on other planets, in oort cloud objects, and as interstellar ice. as a naturally occurring crystalline inorganic solid with an ordered structure, ice is considered to be a mineral. depending on the presence of impurities such as particles of soil or bubbles of air, it can appear transparent or a more or less opaque bluish - white color. virtually all of the ice on earth is of a hexagonal crystalline structure denoted as ice ih ( spoken as " ice one h " ). depending on temperature and pressure, at least nineteen phases ( packing geometries ) can exist. the most common phase transition to ice ih occurs when liquid water is cooled below 0 °c ( 273.15 k, 32 °f ) at standard atmospheric pressure. when water is cooled rapidly ( quenching ), up to three types of amorphous ice can form. interstellar ice is overwhelmingly low - density amorphous ice ( lda ), which likely makes lda ice the most abundant type in the universe. when cooled slowly, correlated proton tunneling occurs below −253.15 °c ( 20 k, −423.67 °f ), giving rise to macroscopic quantum phenomena. ice is abundant on the earth's surface, particularly in the polar regions and above the snow line, where it can aggregate from snow to
|
how does an animal know to perform certain crucial life actions before exposure to it?
|
[
"it is built into their very being",
"it is taught in school",
"they are trained at a special school",
"they have magical powers"
] |
Key fact:
An example of an instinct is the kangaroo 's ability to crawl into its mother 's pouch to drink milk
|
A
| 0
|
openbookqa
|
their aim is to help students in a specific field of study. to do so, they build up a user model where they store information about abilities, knowledge and needs of the user. the system can now adapt to this user by presenting appropriate exercises and examples and offering hints and help where the user is most likely to need them.
|
their aim is to help students in a specific field of study. to do so, they build up a user model where they store information about abilities, knowledge and needs of the user. the system can now adapt to this user by presenting appropriate exercises and examples and offering hints and help where the user is most likely to need them.
|
learning is the process of acquiring new understanding, knowledge, behaviors, skills, values, attitudes, and preferences. the ability to learn is possessed by humans, non - human animals, and some machines ; there is also evidence for some kind of learning in certain plants. some learning is immediate, induced by a single event ( e. g. being burned by a hot stove ), but much skill and knowledge accumulate from repeated experiences. the changes induced by learning often last a lifetime, and it is hard to distinguish learned material that seems to be " lost " from that which cannot be retrieved. human learning starts at birth ( it might even start before ) and continues until death as a consequence of ongoing interactions between people and their environment. the nature and processes involved in learning are studied in many established fields ( including educational psychology, neuropsychology, experimental psychology, cognitive sciences, and pedagogy ), as well as emerging fields of knowledge ( e. g. with a shared interest in the topic of learning from safety events such as incidents / accidents, or in collaborative learning health systems ). research in such fields has led to the identification of various sorts of learning. for example, learning may occur as a result of habituation, or classical conditioning, operant conditioning or as a result of more complex activities such as play, seen only in relatively intelligent animals. learning may occur consciously or without conscious awareness. learning that an aversive event cannot be avoided or escaped may result in a
|
Winter in the Northern Hemisphere means
|
[
"the Northern Hemisphere is experiencing scorching hot weather",
"the Northern Hemisphere is experiencing daily torrential rain",
"the Southern Hemisphere is experiencing warm sunny days",
"the Southern Hemisphere is experiencing frigid temperatures"
] |
Key fact:
winter in the Northern Hemisphere is during the summer in the Southern Hemisphere
|
C
| 2
|
openbookqa
|
the hemisphere that is tilted away from the sun is cooler because it receives less direct rays. as earth orbits the sun, the northern hemisphere goes from winter to spring, then summer and fall. the southern hemisphere does the opposite from summer to fall to winter to spring. when it is winter in the northern hemisphere, it is summer in the southern hemisphere, and vice versa.
|
the subarctic climate ( also called subpolar climate, or boreal climate ) is a continental climate with long, cold ( often very cold ) winters, and short, warm to cool summers. it is found on large landmasses, often away from the moderating effects of an ocean, generally at latitudes from 50°n to 70°n, poleward of the humid continental climates. like other class d climates, they are rare in the southern hemisphere, only found at some isolated highland elevations. subarctic or boreal climates are the source regions for the cold air that affects temperate latitudes to the south in winter. these climates represent the köppen climate classifications dfc, dwc, dsc, dfd, dwd and dsd. description this type of climate offers some of the most extreme seasonal temperature variations found on the planet : in winter, temperatures can drop to below −50 °c ( −58 °f ) and in summer, the temperature may exceed 26 °c ( 79 °f ). however, the summers are short ; no more than three months of the year ( but at least one month ) must have a 24 - hour average temperature of at least 10 °c ( 50 °f ) to fall into this category of climate, and the coldest month should average below 0 °c ( 32 °f ) ( or −3 °c ( 27 °f ) ). record low temperatures can approach −70 °c ( −94 °f ). with 57 consecutive months when the average temperature is below freezing, all moisture
|
the earth is tilted on its axis ( figure above ). this means that as the earth rotates, one hemisphere has longer days with shorter nights. at the same time the other hemisphere has shorter days and longer nights. for example, in the northern hemisphere summer begins on june 21. on this date, the north pole is pointed directly toward the sun. this is the longest day and shortest night of the year in the northern hemisphere. the south pole is pointed away from the sun. this means that the southern hemisphere experiences its longest night and shortest day ( figure below ).
|
A coal mine is what?
|
[
"a person who mines for coal",
"a rare type of stone",
"a place where coal is processed",
"a mine that is beneath the earth where coal is found"
] |
Key fact:
coal mine is a source of coal under the ground
|
D
| 3
|
openbookqa
|
mining is the extraction of valuable geological materials and minerals from the surface of the earth. mining is required to obtain most materials that cannot be grown through agricultural processes, or feasibly created artificially in a laboratory or factory. ores recovered by mining include metals, coal, oil shale, gemstones, limestone, chalk, dimension stone, rock salt, potash, gravel, and clay. the ore must be a rock or mineral that contains valuable constituent, can be extracted or mined and sold for profit. mining in a wider sense includes extraction of any non - renewable resource such as petroleum, natural gas, or even water. modern mining processes involve prospecting for ore bodies, analysis of the profit potential of a proposed mine, extraction of the desired materials, and final reclamation or restoration of the land after the mine is closed. mining materials are often obtained from ore bodies, lodes, veins, seams, reefs, or placer deposits. the exploitation of these deposits for raw materials is dependent on investment, labor, energy, refining, and transportation cost. mining operations can create a negative environmental impact, both during the mining activity and after the mine has closed. hence, most of the world's nations have passed regulations to decrease the impact ; however, the outsized role of mining in generating business for often rural, remote or economically depressed communities means that governments often fail to fully enforce such regulations. work safety has long been a concern as well, and where enforced, modern practices have significantly
|
a field is a mineral deposit containing a metal or other valuable resources in a cost - competitive concentration. it is usually used in the context of a mineral deposit from which it is convenient to extract its metallic component. the deposits are exploited by mining in the case of solid mineral deposits ( such as iron or coal ) and extraction wells in case of fluids ( such as oil, gas or brines ). description in geology and related fields a deposit is a layer of rock or soil with uniform internal features that distinguish it from adjacent layers. each layer is generally one of a series of parallel layers which lie one above the other, laid one on the other by natural forces. they may extend for hundreds of thousands of square kilometers of the earth's surface. the deposits are usually seen as a different color material groups or different structure exposed in cliffs, canyons, caves and river banks. individual agglomerates may vary in thickness from a few millimeters up to a kilometer or more. each cluster represents a specific type of deposit : flint river, sea sand, coal swamp, sand dunes, lava beds, etc. it can consist of layers of sediment, usually by marine or differentiations of certain minerals during cooling of magma or during metamorphosis of the previous rock. the mineral deposits are generally oxides, silicates and sulfates or metal not commonly concentrated in the earth's crust. the deposits must be machined to extract the metals in question from the waste rock and minerals from
|
coal is a solid hydrocarbon formed from decaying plant material over millions of years.
|
Preparing food at the proper temperatures
|
[
"is too much work and should be avoided",
"eradicates potential illness causing organisms",
"allows bacteria to flourish",
"leaves meat raw and under cooked"
] |
Key fact:
cooking food to proper temperatures protects against food poisoning by killing bacteria and viruses
|
B
| 1
|
openbookqa
|
bacterial contamination of foods can lead to digestive problems, an illness known as food poisoning. raw eggs and undercooked meats commonly carry the bacteria that can cause food poisoning. food poisoning can be prevented by cooking meat thoroughly, which kills most microbes, and washing surfaces that have been in contact with raw meat. washing your hands before and after handling food also helps prevent contamination.
|
bacterial contamination of foods can lead to digestive problems, an illness known as food poisoning. raw eggs and undercooked meats commonly carry the bacteria that can cause food poisoning. food poisoning can be prevented by cooking meat thoroughly and washing surfaces that have been in contact with raw meat. washing your hands before and after handling food also helps prevent contamination.
|
bacteria are responsible for many types of diseases in humans.
|
What requires nutrients for survival?
|
[
"sand",
"plastic",
"metal",
"an anaconda"
] |
Key fact:
an animal requires nutrients for survival
|
D
| 3
|
openbookqa
|
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
|
a crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. they are characterized by symmetry, morphology, and directionally dependent physical properties. a crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. ( molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in x - ray, neutron, and electron diffraction based crystallography ). crystal structures of crystalline material are typically determined from x - ray or neutron single - crystal diffraction data and stored in crystal structure databases. they are routinely identified by comparing reflection intensities and lattice spacings from x - ray powder diffraction data with entries in powder - diffraction fingerprinting databases. crystal structures of nanometer sized crystalline samples can be determined via structure factor amplitude information from single - crystal electron diffraction data or structure factor amplitude and phase angle information from fourier transforms of hrtem images of crystallites. they are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice - fringe fingerprint plots with entries in a lattice - fringe fingerprinting database. crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. many provide structure visualization capabilities. they can be browser based or
|
the chemical database service is an epsrc - funded mid - range facility that provides uk academic institutions with access to a number of chemical databases. it has been hosted by the royal society of chemistry since 2013, before which it was hosted by daresbury laboratory ( part of the science and technology facilities council ). currently, the included databases are : acd / i - lab, a tool for prediction of physicochemical properties and nmr spectra from a chemical structure available chemicals directory, a structure - searchable database of commercially available chemicals cambridge structural database ( csd ), a crystallographic database of organic and organometallic structures inorganic crystal structure database ( icsd ), a crystallographic database of inorganic structures crystalworks, a database combining data from csd, icsd and crystmet detherm, a database of thermophysical data for chemical compounds and mixtures spresiweb, a database of organic compounds and reactions = = references = =
|
Sound can be used for communication by
|
[
"creatures",
"plants",
"water",
"planets"
] |
Key fact:
sound can be used for communication by animals
|
A
| 0
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
|
genedb was a genome database for eukaryotic and prokaryotic pathogens. references external links http://www.genedb.org
|
Other than sight bloodhounds can find a meal by
|
[
"social media",
"their phone",
"the internet",
"stench"
] |
Key fact:
smell is used for finding food by some animals
|
D
| 3
|
openbookqa
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
a database dump contains a record of the table structure and / or the data from a database and is usually in the form of a list of sql statements ( " sql dump " ). a database dump is most often used for backing up a database so that its contents can be restored in the event of data loss. corrupted databases can often be recovered by analysis of the dump. database dumps are often published by free content projects, to facilitate reuse, forking, offline use, and long - term digital preservation. dumps can be transported into environments with internet blackouts or otherwise restricted internet access, as well as facilitate local searching of the database using sophisticated tools such as grep. see also import and export of data core dump databases database management system sqlyog - mysql gui tool to generate database dump data portability external links mysqldump a database backup program postgresql dump backup methods, for postgresql databases.
|
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
|
In a warm room, it is likely that the source of heat is
|
[
"a series of metal pipes along a wall",
"a small ceiling fan",
"a stove which is turned off",
"a pile of boxes"
] |
Key fact:
a radiator is a source of heat
|
A
| 0
|
openbookqa
|
in engineering and computing, " stovepipe system " is a pejorative term for a system that has the potential to share data or functionality with other systems but which does not do so. the term evokes the image of stovepipes rising above buildings, each functioning individually. a simple example of a stovepipe system is one that implements its own user ids and passwords, instead of relying on a common user id and password shared with other systems. stovepipes are systems procured and developed to solve a specific problem, characterized by a limited focus and functionality, and containing data that cannot be easily shared with other systems. a stovepipe system is generally considered an example of an anti - pattern, particularly found in legacy systems. this is due to the lack of code reuse, and resulting software brittleness due to potentially general functions only being used on limited input. however, in certain cases stovepipe systems are considered appropriate, due to benefits from vertical integration and avoiding dependency hell. for example, the microsoft excel team has avoided dependencies and even maintained its own c compiler, which helped it to ship on time, have high - quality code, and generate small, cross - platform code. see also not invented here reinventing the wheel stovepipe ( organisation ) = = references = =
|
a pipe is a tubular section or hollow cylinder, usually but not necessarily of circular cross - section, used mainly to convey substances which can flow liquids and gases ( fluids ), slurries, powders and masses of small solids. it can also be used for structural applications ; a hollow pipe is far stiffer per unit weight than the solid members. in common usage the words pipe and tube are usually interchangeable, but in industry and engineering, the terms are uniquely defined. depending on the applicable standard to which it is manufactured, pipe is generally specified by a nominal diameter with a constant outside diameter ( od ) and a schedule that defines the thickness. tube is most often specified by the od and wall thickness, but may be specified by any two of od, inside diameter ( id ), and wall thickness. pipe is generally manufactured to one of several international and national industrial standards. while similar standards exist for specific industry application tubing, tube is often made to custom sizes and a broader range of diameters and tolerances. many industrial and government standards exist for the production of pipe and tubing. the term " tube " is also commonly applied to non - cylindrical sections, i. e., square or rectangular tubing. in general, " pipe " is the more common term in most of the world, whereas " tube " is more widely used in the united states. both " pipe " and " tube " imply a level of rigidity and permanence, whereas
|
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
|
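Below is a minimal, hedged sketch of how rows like the ones previewed above might be loaded and inspected with the Hugging Face `datasets` library. The repository ID is a placeholder (the actual ID is not stated here), and the column names and `train` split are assumptions inferred from the row structure shown in the preview, not confirmed by the dataset itself.

```python
# Sketch only: loading and inspecting one row of this multiple-choice QA dataset.
# Assumptions: repository ID is a placeholder; columns named question, options,
# rationale, label, label_idx, dataset, chunk1 exist; a "train" split exists.
from datasets import load_dataset

# Replace with the actual repository ID of this dataset.
ds = load_dataset("your-username/your-dataset-name", split="train")

# Inspect one row: the question, its answer options (marking the gold option),
# the key-fact rationale, and the beginning of the first retrieved context chunk.
row = ds[0]
print(row["question"])
for i, option in enumerate(row["options"]):
    marker = "*" if i == row["label_idx"] else " "
    print(f" {marker} {chr(ord('A') + i)}. {option}")
print("Rationale:", row["rationale"])
print("Context chunk 1:", row["chunk1"][:200], "...")
```

The same field names can be used to filter by source dataset (for example, keeping only the rows whose `dataset` column equals `openbookqa`) with `ds.filter(...)`, though the exact set of source datasets present is not shown in this preview.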