question: string
options: list
rationale: string
label: string
label_idx: int64
dataset: string
chunk1: string
chunk2: string
chunk3: string
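As a sketch of how one row of the schema above might be represented in code (the `QARow` class name is invented for illustration; the field names and types follow the listed pairs, and the values are taken from the first row below):

```python
from dataclasses import dataclass

# Hypothetical container mirroring the schema above: one field per
# (name, type) pair in the listing.
@dataclass
class QARow:
    question: str
    options: list
    rationale: str
    label: str
    label_idx: int
    dataset: str
    chunk1: str
    chunk2: str
    chunk3: str

row = QARow(
    question="What purpose does a plant light serve?",
    options=["Comfort them", "Mimic sunlight", "Keep plants warm", "Protect from bugs"],
    rationale="Key fact: a plant light is used to help plants by mimicking sunlight",
    label="B",
    label_idx=1,
    dataset="openbookqa",
    chunk1="...", chunk2="...", chunk3="...",
)

# label_idx indexes into options and should agree with the letter label.
assert row.options[row.label_idx] == "Mimic sunlight"
assert ord(row.label) - ord("A") == row.label_idx
```

The invariant checked at the end (letter label agreeing with the integer index) holds for every row shown in this excerpt.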
What purpose does a plant light serve?
[ "Comfort them", "Mimic sunlight", "Keep plants warm", "Protect from bugs" ]
Key fact: a plant light is used to help plants by mimicking sunlight
B
1
openbookqa
Dr. Duke's Phytochemical and Ethnobotanical Databases is an online database developed by James A. Duke at the USDA. The databases report species, phytochemicals, and biological activity, as well as ethnobotanical uses. The current phytochemical and ethnobotanical databases facilitate plant, chemical, bioactivity, and ethnobotany searches. A large number of plants and their chemical profiles are covered, and data are structured to support browsing and searching in several user-focused ways. For example, users can: get a list of chemicals and activities for a specific plant of interest, using either its scientific or common name; download a list of chemicals and their known activities in PDF or spreadsheet form; find plants with chemicals known for a specific biological activity; display a list of chemicals with their LD toxicity data; find plants with potential cancer-preventing activity; display a list of plants for a given ethnobotanical use; and find out which plants have the highest levels of a specific chemical. References to the supporting scientific publications are provided for each specific result. Also included are links to nutritional databases, plants and cancer treatments, and other plant-related databases. The content of the database is licensed under the Creative Commons CC0 public domain dedication. External links: Dr. Duke's Phytochemical and Ethnobotanical Databases. References: (dataset) U.S. Department of Agriculture, Agricultural Research Service, 1992-2016
A graph database (GDB) is a database that uses graph structures for semantic queries, with nodes, edges, and properties to represent and store data. A key concept of the system is the graph (or edge or relationship). The graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. The relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. Graph databases hold the relationships between data as a priority. Querying relationships is fast because they are perpetually stored in the database. Relationships can be intuitively visualized using graph databases, making them useful for heavily interconnected data. Graph databases are commonly referred to as NoSQL databases. Graph databases are similar to 1970s network-model databases in that both represent general graphs, but network-model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. The underlying storage mechanism of graph databases can vary. Relationships are first-class citizens in a graph database and can be labelled, directed, and given properties. Some depend on a relational engine and store the graph data in a table (although a table is a logical element, so this approach imposes a level of abstraction between the graph database management system and physical storage devices). Others use a key-value store or document-oriented database for storage, making them inherently NoSQL structures. As of 2021, no graph query
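The claim that relationships are stored directly and traversed in one operation can be illustrated with a minimal in-memory sketch. This is a toy structure, not the API of any particular graph database; all names and data here are invented:

```python
from collections import defaultdict

# Toy graph store: nodes carry properties, and edges are first-class
# records with a label, a direction (src -> dst), and their own properties.
nodes = {"alice": {"kind": "person"}, "bob": {"kind": "person"}}
out_edges = defaultdict(list)  # adjacency: src -> [(label, dst, props)]

def relate(src, label, dst, **props):
    """Record a labelled, directed relationship with properties."""
    out_edges[src].append((label, dst, props))

relate("alice", "KNOWS", "bob", since=2019)

# Traversal is a direct lookup on the stored relationship: one operation,
# no join over foreign keys as a relational engine would need.
friends = [dst for label, dst, _ in out_edges["alice"] if label == "KNOWS"]
print(friends)  # ['bob']
```

The point of the sketch is the adjacency structure: because each edge is materialized next to its source node, following a chain of edges is repeated dictionary lookups rather than repeated table joins.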
The Plant Proteome Database is a National Science Foundation-funded project to determine the biological function of each protein in plants. It includes data for two plants that are widely studied in molecular biology, Arabidopsis thaliana and maize (Zea mays). Initially the project was limited to plant plastids, under the name of the Plastid PDB, but was expanded and renamed Plant PDB in November 2007. See also: proteome. References. External links: Plant Proteome Database home page
When will the humidity increase in relation to bodies of water?
[ "when you get closer", "when you are shooting", "when you get farther", "when you are laughing" ]
Key fact: as distance from water decreases, humidity will increase
A
0
openbookqa
Laughter is a pleasant physical reaction and emotion consisting usually of rhythmical, usually audible contractions of the diaphragm and other parts of the respiratory system. It is a response to certain external or internal stimuli. Laughter can arise from such activities as being tickled, or from humorous stories, imagery, videos, or thoughts. Most commonly, it is considered an auditory expression of a number of positive emotional states, such as joy, mirth, happiness, or relief. On some occasions, however, it may be caused by contrary emotional states such as embarrassment, surprise, or confusion, as in nervous laughter or a courtesy laugh. Age, gender, education, language, and culture are all indicators as to whether a person will experience laughter in a given situation. Other than humans, some other species of primate (chimpanzees, gorillas, and orangutans) show laughter-like vocalizations in response to physical contact such as wrestling, play chasing, or tickling. Laughter is a part of human behavior regulated by the brain, helping humans clarify their intentions in social interaction and providing an emotional context to conversations. Laughter is used as a signal for being part of a group: it signals acceptance and positive interactions with others. Laughter is sometimes seen as contagious, and the laughter of one person can itself provoke laughter from others as a positive feedback. The study of humor and laughter, and its psychological and physiological effects on the human body, is called gelotology. Nature: laughter might be thought of as an audible
GUN (also known as Graph Universe Node, gun.js, and GunDB) is an open-source, offline-first, real-time, decentralized graph database written in JavaScript for the web browser. The database is implemented as a peer-to-peer network distributed across "browser peers" and "runtime peers". It employs multi-master replication with a custom commutative replicated data type (CRDT). GUN is currently used in the decentralized version of the Internet Archive. References. External links: official website; GUN on GitHub
development team can understand the priority of the request and the context surrounding it. Here's the user story derived from the feedback: **User story:** "As a visually impaired user of the JaaS app, I want the jokes to be displayed as text instead of images, so that I can use the voice-over feature on my phone to have the jokes read aloud to me, allowing me to share humor with others and improve my overall mood." This user story succinctly encapsulates the user's identity, their specific need for accessible content, and the emotional benefit they hope to achieve.
What do hawks eat?
[ "lizard droppings", "bearded dragons", "cows", "grass" ]
Key fact: hawks eat lizards
B
1
openbookqa
PhylomeDB is a public biological database for complete catalogs of gene phylogenies (phylomes). It allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. Moreover, PhylomeDB provides genome-wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. The automated pipeline used to reconstruct trees aims at providing a high-quality phylogenetic analysis of different genomes, including maximum-likelihood tree inference, alignment trimming, and evolutionary model testing. PhylomeDB also includes a public download section with the complete set of trees, alignments, and orthology predictions, as well as a web API that facilitates cross-linking trees from external sources. Finally, PhylomeDB provides an advanced tree visualization interface based on the ETE toolkit, which integrates tree topologies, taxonomic information, domain mapping, and alignment visualization in a single interactive tree image. New steps on PhylomeDB: the tree-searching engine of PhylomeDB was updated to provide a gene-centric view of all PhylomeDB resources. Thus, after a protein or gene search, all the available trees in PhylomeDB are listed and organized by phylome and tree type. Users can switch among all available seed and collateral trees without losing focus on the searched protein or gene. In PhylomeDB v4, all the information available for each tree
TassDB (tandem splice site database) is a database of tandem splice sites of eight species. See also: alternative splicing. References. External links: https://archive.today/20070106023527/http://helios.informatik.uni-freiburg.de/tassdb/
How many odorant receptors do honey bees have?
[ "4", "270", "170", "70" ]
Key fact: bees convert nectar into honey
C
2
openbookqa
A relational database (RDB) is a database based on the relational model of data, as proposed by E. F. Codd in 1970. A relational database management system (RDBMS) is a type of database management system that stores data in a structured format using rows and columns. Many relational database systems are equipped with the option of using SQL (Structured Query Language) for querying and updating the database. History: the concept of the relational database was defined by E. F. Codd at IBM in 1970. Codd introduced the term relational in his research paper "A Relational Model of Data for Large Shared Data Banks". In this paper and later papers, he defined what he meant by relation. One well-known definition of what constitutes a relational database system is composed of Codd's 12 rules. However, no commercial implementations of the relational model conform to all of Codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum: present the data to the user as relations (a presentation in tabular form, i.e. as a collection of tables with each table consisting of a set of rows and columns); and provide relational operators to manipulate the data in tabular form. In 1974, IBM began developing System R, a research project to develop a prototype RDBMS. The first system sold as an RDBMS was Multics Relational Data Store (June 1976). Oracle was released in 1979 by
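The rows-and-columns presentation and SQL querying described above can be demonstrated with Python's built-in sqlite3 module, a small RDBMS. The table and its contents are invented for illustration:

```python
import sqlite3

# In-memory relational database: data is presented to the user as a table
# of rows and columns and manipulated with SQL, as the relational model
# prescribes.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE planet (name TEXT, moons INTEGER)")
con.executemany(
    "INSERT INTO planet VALUES (?, ?)",
    [("Mercury", 0), ("Earth", 1), ("Mars", 2)],
)

# A relational operator (selection plus projection) expressed in SQL.
rows = con.execute(
    "SELECT name FROM planet WHERE moons > 0 ORDER BY name"
).fetchall()
print(rows)  # [('Earth',), ('Mars',)]
```

Note how the query names only the relation and a predicate; the engine, not the caller, decides how the rows are located, which is the abstraction the relational model introduced.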
An object database or object-oriented database is a database management system in which information is represented in the form of objects as used in object-oriented programming. Object databases are different from relational databases, which are table-oriented. A third type, object-relational databases, is a hybrid of both approaches. Object databases have been considered since the early 1980s. Overview: object-oriented database management systems (OODBMSs), also called ODBMSs (object database management systems), combine database capabilities with object-oriented programming language capabilities. OODBMSs allow object-oriented programmers to develop the product, store the components as objects, and replicate or modify existing objects to make new objects within the OODBMS. Because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the OODBMS and the programming language will use the same model of representation. Relational DBMS projects, by way of contrast, maintain a clearer division between the database model and the application. As the usage of web-based technology increases with the implementation of intranets and extranets, companies have a vested interest in OODBMSs to display their complex data. Using a DBMS that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer-aided design (CAD). Some object-oriented databases are designed to work well with object-oriented programming languages such as Delphi, Ruby, Python
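The contrast drawn above, persisting program objects in their object form rather than mapping them onto tables, can be sketched with Python's standard pickle module. This is a stand-in for the idea, not an actual OODBMS, and the `Part` class is invented:

```python
import pickle

# An ordinary program object, including a nested object structure that a
# relational mapping would have to spread across tables and foreign keys.
class Part:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

engine = Part("engine", [Part("piston"), Part("valve")])

# Stored and retrieved in object form: the same model of representation
# the programming language itself uses, with no relational mapping layer.
blob = pickle.dumps(engine)
restored = pickle.loads(blob)
print(restored.name, [c.name for c in restored.children])
```

A real OODBMS adds querying, transactions, and sharing on top of this, but the core convenience is the one shown: the object graph round-trips without being decomposed into rows.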
Instinctive behavior may be viewed when observing
[ "freshly hatched turtles seeking water", "birds being trained to play games", "cats being fed salami", "dogs riding in cars" ]
Key fact: An example of an instinctive behavior is a baby bird pecking at its shell to hatch
A
0
openbookqa
Beginning in 2000, scientists from the National Oceanic and Atmospheric Administration, Stanford University, and the University of California, Santa Cruz combined to form TOPP. As part of TOPP, researchers attach satellite tags to elephant seals, white sharks, giant leatherback turtles, bluefin tuna, swordfish, and other marine animals. The tags collect information such as how deep each animal dives, the levels of ambient light (to help determine an animal's location), and interior and exterior body temperature. Some tags also collect information about the temperature, salinity, and depth of the water surrounding an animal to help scientists identify ocean currents. The tags send the data to a satellite, which in turn sends the data to the scientists. They use this information to create maps of migration patterns and discover new information about different marine ecosystems. The information collected by TOPP offers rare insights into the lives of marine animals. Without TOPP, that information would otherwise remain unknown. With TOPP, scientists are developing a working knowledge of the particular migration routes animals take, as well as the locations of popular breeding grounds and the environmental dangers faced by different species. TOPP has shed light on how we can better protect the leatherback turtle and other endangered species.
For example, loggerhead sea turtle hatchlings are commonly seen exhibiting a symmetrical gait on sand, whereas leatherback sea turtles employ an asymmetrical gait while on land. Notably, leatherbacks employ their front (pectoral) flippers more during forward terrestrial locomotion. Sea turtles can be seen nesting on subtropical and tropical beaches all around the world and exhibit behavior such as the arribada (collective animal behavior). This is a phenomenon seen in Kemp's ridley turtles, which all emerge at once, on a single night, onto the beach to lay their nests.
The topography of our only natural satellite's surface is
[ "smooth", "made of cheese", "full of gold", "mountainous" ]
Key fact: the moon's surface contains highlands
D
3
openbookqa
MetPetDB is a relational database and repository for global geochemical data on, and images collected from, metamorphic rocks from the Earth's crust. MetPetDB is designed and built by a global community of metamorphic petrologists in collaboration with computer scientists at Rensselaer Polytechnic Institute as part of the National Cyberinfrastructure Initiative, and is supported by the National Science Foundation. MetPetDB is unique in that it incorporates image data collected by a variety of techniques, e.g. photomicrographs, backscattered electron images (SEM), and X-ray maps collected by wavelength-dispersive or energy-dispersive spectroscopy. Purpose: MetPetDB was built for the purpose of archiving published data and for storing new data for ready access by researchers and students in the petrologic community. This database facilitates the gathering of information for researchers beginning new projects and permits browsing and searching for data relating to anywhere on the globe. MetPetDB provides a platform for collaborative studies among researchers anywhere on the planet, serves as a portal for students beginning their studies of metamorphic geology, and acts as a repository for the vast quantities of data being collected by researchers globally. Design: the basic structure of MetPetDB is based on a geologic sample and derivative subsamples. Geochemical data are linked to subsamples and the minerals within them, while image data can relate to samples or subsamples. MetPetDB is designed to store the distinct spatial/textural context
A crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. Crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. They are characterized by symmetry, morphology, and directionally dependent physical properties. A crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. (Molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in X-ray, neutron, and electron diffraction based crystallography.) Crystal structures of crystalline material are typically determined from X-ray or neutron single-crystal diffraction data and stored in crystal structure databases. They are routinely identified by comparing reflection intensities and lattice spacings from X-ray powder diffraction data with entries in powder-diffraction fingerprinting databases. Crystal structures of nanometer-sized crystalline samples can be determined via structure factor amplitude information from single-crystal electron diffraction data, or structure factor amplitude and phase angle information from Fourier transforms of HRTEM images of crystallites. They are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice-fringe fingerprint plots with entries in a lattice-fringe fingerprinting database. Crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. Many provide structure visualization capabilities. They can be browser based or
What is a vehicle for the flow of electricity?
[ "a metal sword", "a wooden chair", "a plastic ring", "a dry towel" ]
Key fact: An electrical conductor is a vehicle for the flow of electricity
A
0
openbookqa
A data item describes an atomic state of a particular object concerning a specific property at a certain time point. A collection of data items for the same object at the same time forms an object instance (or table row). Any type of complex information can be broken down into elementary data items (atomic states). Data items are identified by object (O), property (P), and time (T), while the value (V) is a function of O, P, and T: V = f(O, P, T). Values typically are represented by symbols like numbers, texts, images, sounds, or videos. Values are not necessarily atomic. A value's complexity depends on the complexity of the property and time component. When looking at databases or XML files, the object is usually identified by an object name or other type of object identifier, which is part of the "data". Properties are defined as columns (table row), properties (object instance), or tags (XML). Often, time is not explicitly expressed and is an attribute applying to the complete data set. Other data collections provide time on the instance level (time series), column level, or even attribute/property level. References
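The identification scheme V = f(O, P, T) can be sketched as a mapping keyed by (object, property, time). The sensor objects, properties, and values here are invented for illustration:

```python
# Each data item: a value indexed by (object, property, time).
items = {
    ("sensor-1", "temperature", "2024-01-01T00:00"): 21.5,
    ("sensor-1", "humidity",    "2024-01-01T00:00"): 0.40,
}

def value(obj, prop, time):
    """V = f(O, P, T): look up the atomic state of one object's property."""
    return items[(obj, prop, time)]

# All items sharing the same object and time form one object instance,
# i.e. one table row in database terms.
instance = {p: v for (o, p, t), v in items.items()
            if o == "sensor-1" and t == "2024-01-01T00:00"}
print(value("sensor-1", "temperature", "2024-01-01T00:00"))
print(instance)
```

Grouping the same items by property instead of by object would yield a column, and grouping by time a snapshot, which is the sense in which the (O, P, T) triple generalizes rows, columns, and time series.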
GeneDB was a genome database for eukaryotic and prokaryotic pathogens. References. External links: http://www.genedb.org
A plastic bag is filled with milk and is placed in a chest. The chest has a device which takes all of the warm air away, so eventually, the milk will
[ "quake", "be seared", "be solid", "melt" ]
Key fact: freezing point means temperature below which a liquid freezes
C
2
openbookqa
Solar energy called sunlight originates from
[ "jupiter", "center of universe", "our celestial star", "deep space" ]
Key fact: the sun is the source of solar energy called sunlight
C
2
openbookqa
today we know that we have eight planets, five dwarf planets, over 165 moons, and many, many asteroids and other small objects in our solar system. we also know that the sun is not the center of the universe. but it is the center of the solar system.
astroinformatics is an interdisciplinary field of study involving the combination of astronomy, data science, machine learning, informatics, and information / communications technologies. the field is closely related to astrostatistics. data - driven astronomy ( dda ) refers to the use of data science in astronomy. several outputs of telescopic observations and sky surveys are taken into consideration and approaches related to data mining and big data management are used to analyze, filter, and normalize the data set that are further used for making classifications, predictions, and anomaly detections by advanced statistical approaches, digital image processing and machine learning. the output of these processes is used by astronomers and space scientists to study and identify patterns, anomalies, and movements in outer space and conclude theories and discoveries in the cosmos. background astroinformatics is primarily focused on developing the tools, methods, and applications of computational science, data science, machine learning, and statistics for research and education in data - oriented astronomy. early efforts in this direction included data discovery, metadata standards development, data modeling, astronomical data dictionary development, data access, information retrieval, data integration, and data mining in the astronomical virtual observatory initiatives. further development of the field, along with astronomy community endorsement, was presented to the national research council ( united states ) in 2009 in the astroinformatics " state of the profession " position paper for the 2010 astronomy and astrophysics decadal survey. that position paper provided the basis for the subsequent more detailed
jupiter is the largest planet in our solar system.
Which would reach the other side of a room the fastest?
[ "the world's fastest bird", "the world's fastest sprinter", "the light from a flashlight", "an extremely loud audio signal" ]
Key fact: a flashlight emits light
C
2
openbookqa
a pulse train is a sequence of discrete pulses occurring in a signal over time. typically, these pulses are of similar shape and are evenly spaced in time, forming a periodic or near - periodic sequence. pulse train outputs are widely used in tachometers, speedometers and encoders. such pulse sequences appear in multiple fields of technology and engineering, where a pulse train often denotes a series of electrical pulses generated by a sensor ( for example, teeth of a rotating gear inducing pulses in a pickup sensor ). the term is also used in signal processing and computer graphics, where a pulse train is treated as a mathematical signal or function that repeats with a fixed period. definition and mechanism several key parameters define the characteristics of a pulse train. the pulse duration, often denoted by the greek letter tau ( τ ) or as t1, represents the length of time for which each pulse is active, typically at its high level. following each pulse is a period of inactivity known as the pulse separation, indicated as t2. the sum of the pulse duration and the pulse separation constitutes the period ( t ) of the wave, representing one complete cycle ( t = t1 + t2 ). a crucial parameter derived from these is the duty cycle ( d ), which is the ratio of the pulse duration to the total period ( d = τ / t ), often expressed as a percentage. notably, a pulse train with a 50 % duty cycle, where the pulse duration is equal to the pulse
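The timing relations in the passage above reduce to two lines of arithmetic, shown here in a short sketch (the function name and the example pulse times are illustrative, not from the source):

```python
# Pulse-train timing as described above:
#   t1 = pulse duration (tau), t2 = pulse separation,
#   period t = t1 + t2, duty cycle d = t1 / t.

def duty_cycle(t1, t2):
    """Fraction of the period for which the pulse is active."""
    period = t1 + t2
    return t1 / period

# equal duration and separation -> a 50 % duty cycle (square wave)
print(duty_cycle(0.005, 0.005))           # 0.5

# a 2 ms pulse followed by 8 ms of separation -> 10 ms period, 20 %
print(f"{duty_cycle(0.002, 0.008):.0%}")  # 20%
```

Expressing d as a percentage is just a formatting choice, as the last line shows.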
zipcodezoo was a free, online encyclopedia intended to document all living species and infraspecies known to science. it was compiled from existing databases. it offered one page for each living species, supplementing text with video, sound, and images where available. zipcodezoo was integrated into an app called lookup life. as of 2019 the site no longer works. zipcodezoo was an online database that collected the natural history, classification, species characteristics, conservation biology, and distribution information of thousands of species and infraspecies. it included over 800, 000 photographs, 50, 000 videos, 160, 000 sound clips, and 3. 2 million maps describing nearly 3. 2 million species and infraspecies. its content is now only available on the internet archive. the site and its sister site lookup. life included a number of specialized search functions, such as identifying a bird species from its color, shape and other traits, including where it was seen ; or generating a list of plants or animals likely to be found in or near a specific location ( a zipcode, state, country, latitude / longitude, etc. ). the searches could be restricted to specific taxa, or broad categories like reptiles or fish. a sound trainer could play multiple bird song recordings simultaneously. zipcodezoo drew on the catalogue of life for its basic species list, the global biodiversity information facility for its maps, flickr for many of its photos, youtube for videos, xeno
in radio astronomy, perytons are short man - made radio signals of a few milliseconds resembling fast radio bursts ( frb ). a peryton differs from radio frequency interference by the fact that it is a pulse of several to tens of milliseconds in duration which sweeps down in frequency. they are further verified by the fact that they occur at the same time in many beams, indicating that they come from earth, whereas frbs occur in only one or two of the beams, indicating that they are of astronomical origin. the first signal occurred in 2001 but was not discovered until 2007. first detected at the parkes observatory, data gathered by the telescope also suggested the source was local. the signals were found to be caused by premature opening of a microwave oven door nearby. naming due to the unclear origin of the detections at first, the radio signals were named after the peryton, a mythical winged stag that casts the shadow of a man. this translates to " strangeness made by man ". this name was chosen for these signals because they are man - made but have characteristics that mimic the natural phenomenon of frbs. the name was coined by sarah burke - spolaor et al. in 2011. detection perytons were observed at the parkes observatory and bleien radio observatory. after the discovery of the first frb in 2007, dr. burke searched through old telescope data looking for similar signals. she found what she was looking for,
If a lizard is unable to obtain sufficient nutrients in a period of time, the resulting effect may be
[ "ceased existence", "moving slowly", "feeling cold", "finding food" ]
Key fact: an animal needs to eat food for nutrients
A
0
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as nosql databases. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
arangodb is a graph database system developed by arangodb inc. arangodb is a multi - model database system since it supports three data models ( graphs, json documents, key / value ) with one database core and a unified query language aql ( arangodb query language ). aql is mainly a declarative language and allows the combination of different data access patterns in a single query. arangodb is a nosql database system, but aql is similar in many ways to sql. it uses rocksdb as a storage engine. history arangodb gmbh was founded in 2014 by claudius weinberger and frank celler. they originally called the database system " a versatile object container ", or avoc for short, leading them to call the database avocadodb. later, they changed the name to arangodb. the word " arango " refers to a little - known avocado variety grown in cuba. in january 2017 arangodb raised a seed round investment of 4. 2 million euros led by target partners. in march 2019 arangodb raised 10 million dollars in series a funding led by bow capital. in october 2021 arangodb raised 27. 8 million dollars in series b funding led by iris capital. release history features json : arangodb uses json as a default storage format, but internally it uses arangodb velocypack, a fast and compact binary format for serialization and storage. arango
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision - support databases and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
Which would likely protect a rabbit?
[ "surfing", "above ground nesting", "swimming", "living under ground" ]
Key fact: eagles eat rabbits
D
3
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as nosql databases. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
the interim register of marine and nonmarine genera ( irmng ) is a taxonomic database which attempts to cover published genus names for all domains of life ( also including subgenera in zoology ), from 1758 in zoology ( 1753 in botany ) up to the present, arranged in a single, internally consistent taxonomic hierarchy, for the benefit of biodiversity informatics initiatives plus general users of biodiversity ( taxonomic ) information. in addition to containing over 500, 000 published genus name instances as at july 2024 ( also including subgeneric names in zoology ), the database holds over 1. 7 million species names ( 1. 3 million listed as " accepted " ), although this component of the data is not maintained in as current or complete state as the genus - level holdings. irmng can be queried online for access to the latest version of the dataset and is also made available as periodic snapshots or data dumps for import / upload into other systems as desired. the database was commenced in 2006 at the then csiro division of marine and atmospheric research in australia and, since 2016, has been hosted at the flanders marine institute ( vliz ) in belgium. description irmng contains scientific names ( only ) of the genera ( plus zoological subgenera, see below ), a subset of species, and principal higher ranks of most plants, animals and other kingdoms, both living and extinct, within a standardized taxonomic hierarchy, with associated machine - readable information on habitat (
an array database management system or array dbms provides database services specifically for arrays ( also called raster data ), that is : homogeneous collections of data items ( often called pixels, voxels, etc. ), sitting on a regular grid of one, two, or more dimensions. often arrays are used to represent sensor, simulation, image, or statistics data. such arrays tend to be big data, with single objects frequently ranging into terabyte and soon petabyte sizes ; for example, today's earth and space observation archives typically grow by terabytes a day. array databases aim at offering flexible, scalable storage and retrieval on this information category. overview in the same style as standard database systems do on sets, array dbmss offer scalable, flexible storage and flexible retrieval / manipulation on arrays of ( conceptually ) unlimited size. as in practice arrays never appear standalone, such an array model normally is embedded into some overall data model, such as the relational model. some systems implement arrays as an analogy to tables, some introduce arrays as an additional attribute type. management of arrays requires novel techniques, particularly due to the fact that traditional database tuples and objects tend to fit well into a single database page ( a unit of disk access on the server, typically 4 kb ), while array objects can easily span several media. the prime task of the array storage manager is to give fast access to large arrays and sub - arrays. to this end, arrays get partitioned, during insertion, into
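The partitioning step the passage ends on can be sketched with plain Python lists. This is an illustration of the general tiling idea, not any specific array DBMS; the function name and the 4x4 tile size are arbitrary choices for the example:

```python
# Sketch of array partitioning ("tiling"): a large 2-d array is cut into
# fixed-size tiles at insertion time, so that reading a sub-array only
# needs to touch the tiles it overlaps rather than the whole object.

def make_tiles(array, th, tw):
    """Partition a 2-d list into (row, col)-indexed tiles of th x tw cells."""
    tiles = {}
    for i in range(0, len(array), th):
        for j in range(0, len(array[0]), tw):
            tiles[(i // th, j // tw)] = [row[j:j + tw] for row in array[i:i + th]]
    return tiles

# an 8x8 array of cell values, tiled into four 4x4 tiles
data = [[r * 8 + c for c in range(8)] for r in range(8)]
tiles = make_tiles(data, 4, 4)
print(len(tiles))           # 4
print(tiles[(1, 1)][0][0])  # 36 -- cell (4, 4), first cell of the last tile
```

A sub-array read then resolves to the few tiles whose index ranges it overlaps; choosing tile shape to match common access patterns is the tuning problem array storage managers face.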
Phase changing occurs when
[ "water is poured into a glass", "cake is left to cool on the counter", "jello mix is refrigerated", "turkey is sliced into pieces" ]
Key fact: a phase change is when matter changes from one state into another state
C
2
openbookqa
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, object - relational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop products, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
food is chemical energy stored in organic molecules.
a crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. they are characterized by symmetry, morphology, and directionally dependent physical properties. a crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. ( molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in x - ray, neutron, and electron diffraction based crystallography ). crystal structures of crystalline material are typically determined from x - ray or neutron single - crystal diffraction data and stored in crystal structure databases. they are routinely identified by comparing reflection intensities and lattice spacings from x - ray powder diffraction data with entries in powder - diffraction fingerprinting databases. crystal structures of nanometer sized crystalline samples can be determined via structure factor amplitude information from single - crystal electron diffraction data or structure factor amplitude and phase angle information from fourier transforms of hrtem images of crystallites. they are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice - fringe fingerprint plots with entries in a lattice - fringe fingerprinting database. crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. many provide structure visualization capabilities. they can be browser based or
A reason there is so much debris and damage during tornadoes is due to rocks that are getting
[ "broken apart", "eaten", "evaporated", "stolen" ]
Key fact: breaking apart rocks can cause debris
A
0
openbookqa
a database dump contains a record of the table structure and / or the data from a database and is usually in the form of a list of sql statements ( " sql dump " ). a database dump is most often used for backing up a database so that its contents can be restored in the event of data loss. corrupted databases can often be recovered by analysis of the dump. database dumps are often published by free content projects, to facilitate reuse, forking, offline use, and long - term digital preservation. dumps can be transported into environments with internet blackouts or otherwise restricted internet access, as well as facilitate local searching of the database using sophisticated tools such as grep. see also : import and export of data ; core dump ; databases ; database management system ; sqlyog, a mysql gui tool to generate database dumps ; data portability. external links : mysqldump, a database backup program ; postgresql dump, backup methods for postgresql databases.
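The "list of sql statements" form of a dump can be seen directly with Python's built-in sqlite3 module, whose `iterdump()` emits the statements needed to recreate the database. This is a small sketch of the dump-and-restore cycle the passage describes (the table and row are invented for the example); full-scale tools like mysqldump work on the same principle:

```python
# Create a tiny database, dump it to SQL text, and restore the dump
# into a fresh database -- the backup/restore cycle described above.
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
src.execute("INSERT INTO users (name) VALUES ('ada')")
src.commit()

dump = "\n".join(src.iterdump())   # the SQL dump, as plain text

dst = sqlite3.connect(":memory:")
dst.executescript(dump)            # restoring recreates structure and data
print(dst.execute("SELECT name FROM users").fetchone())  # ('ada',)
```

Because the dump is plain text, it can be versioned, published, or searched with tools such as grep, which is exactly why free content projects favour this format.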
exploitdb, sometimes stylized as exploit database or exploit - database, is a public and open source vulnerability database maintained by offensive security. it is one of the largest and most popular exploit databases in existence. while the database is publicly available via their website, the database can also be used by utilizing the searchsploit command - line tool which is native to kali linux. the database also contains proof - of - concepts ( pocs ), helping information security professionals learn new exploit variations. in ethical hacking and penetration testing guide, rafay baloch said exploit - db had over 20, 000 exploits, and was available in backtrack linux by default. in ceh v10 certified ethical hacker study guide, ric messier called exploit - db a " great resource ", and stated it was available within kali linux by default, or could be added to other linux distributions. the current maintainers of the database, offensive security, are not responsible for creating the database. the database was started in 2004 by a hacker group known as milw0rm and has changed hands several times. as of 2023, the database contained 45, 000 entries from more than 9, 000 unique authors. see also : offensive security ; offensive security certified professional. external links : official website
in august 2022, a hacker stole a copy of a customer database, and some copies of the customers'password vaults. the stolen information includes names, email addresses, billing addresses, partial credit cards and website urls. some of the data in the vaults was unencrypted, while other data was encrypted with users'master passwords. the security of each user's encrypted data depends on the strength of the user's master password, or whether the password had previously been leaked, and the number of rounds of encryption used.
Stems are to flowers as
[ "dogs are to cats", "cows are to cud", "bees are to pollen", "silos are to grains" ]
Key fact: a stem is used to store water by some plants
D
3
openbookqa
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb also includes a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without missing the focus on the searched protein or gene. in phylomedb v4 all the information available for each tree
the levels of classification taxonomy ( which literally means “ arrangement law ” ) is the science of naming and grouping species to construct an internationally shared classification system. the taxonomic classification system ( also called the linnaean system after its inventor, carl linnaeus, a swedish naturalist ) uses a hierarchical model. a hierarchical system has levels and each group at one of the levels includes groups at the next lowest level, so that at the lowest level each member belongs to a series of nested groups. an analogy is the nested series of directories on the main disk drive of a computer. for example, in the most inclusive grouping, scientists divide organisms into three domains : bacteria, archaea, and eukarya. within each domain is a second level called a kingdom. each domain contains several kingdoms. within kingdoms, the subsequent categories of increasing specificity are : phylum, class, order, family, genus, and species. as an example, the classification levels for the domestic dog are shown in figure 12. 3. the group at each level is called a taxon ( plural : taxa ). in other words, for the dog, carnivora is the taxon at the order level, canidae is the taxon at the family level, and so forth. organisms also have a common name that people typically use, such as domestic dog, or wolf. each taxon name is capitalized except for species, and the genus and species names are italicized. scientists refer to an organism by its
the animal genome size database is a catalogue of published genome size estimates for vertebrate and invertebrate animals. it was created in 2001 by dr. t. ryan gregory of the university of guelph in canada. as of september 2005, the database contains data for over 4, 000 species of animals. a similar database, the plant dna c - values database ( c - value being analogous to genome size in diploid organisms ) was created by researchers at the royal botanic gardens, kew, in 1997. see also : list of organisms by chromosome count. external links : animal genome size database ; plant dna c - values database ; fungal genome size database ; cell size database
Gravitational force never affects
[ "balloons", "stars", "sunshine", "air" ]
Key fact: gravitational force causes objects that have mass to be pulled down on a planet
C
2
openbookqa
weather stations collect data on land and sea. weather balloons, satellites, and radar collect data in the atmosphere.
an array database management system or array dbms provides database services specifically for arrays ( also called raster data ), that is : homogeneous collections of data items ( often called pixels, voxels, etc. ), sitting on a regular grid of one, two, or more dimensions. often arrays are used to represent sensor, simulation, image, or statistics data. such arrays tend to be big data, with single objects frequently ranging into terabyte and soon petabyte sizes ; for example, today's earth and space observation archives typically grow by terabytes a day. array databases aim at offering flexible, scalable storage and retrieval on this information category. overview in the same style as standard database systems do on sets, array dbmss offer scalable, flexible storage and flexible retrieval / manipulation on arrays of ( conceptually ) unlimited size. as in practice arrays never appear standalone, such an array model normally is embedded into some overall data model, such as the relational model. some systems implement arrays as an analogy to tables, some introduce arrays as an additional attribute type. management of arrays requires novel techniques, particularly due to the fact that traditional database tuples and objects tend to fit well into a single database page ( a unit of disk access on the server, typically 4 kb ), while array objects can easily span several media. the prime task of the array storage manager is to give fast access to large arrays and sub - arrays. to this end, arrays get partitioned, during insertion, into
a balloon is a bag filled with a gas with a lower density than the surrounding air to provide buoyancy. the gas may be hot air, hydrogen, helium or, in the past, coal gas. the use of buoyant gases is unknown in the natural world.
The sidewalk next to a house having a crack in it and having vegetation growing from it is considered?
[ "insects", "weathering", "erosion", "lava" ]
Key fact: soil erosion is when wind moves soil from environments
B
1
openbookqa
metpetdb is a relational database and repository for global geochemical data on, and images collected from, metamorphic rocks from the earth's crust. metpetdb is designed and built by a global community of metamorphic petrologists in collaboration with computer scientists at rensselaer polytechnic institute as part of the national cyberinfrastructure initiative and supported by the national science foundation. metpetdb is unique in that it incorporates image data collected by a variety of techniques, e. g. photomicrographs, backscattered electron images ( sem ), and x - ray maps collected by wavelength dispersive spectroscopy or energy dispersive spectroscopy. purpose metpetdb was built for the purpose of archiving published data and for storing new data for ready access to researchers and students in the petrologic community. this database facilitates the gathering of information for researchers beginning new projects and permits browsing and searching for data relating to anywhere on the globe. metpetdb provides a platform for collaborative studies among researchers anywhere on the planet, serves as a portal for students beginning their studies of metamorphic geology, and acts as a repository of vast quantities of data being collected by researchers globally. design the basic structure of metpetdb is based on a geologic sample and derivative subsamples. geochemical data are linked to subsamples and the minerals within them, while image data can relate to samples or subsamples. metpetdb is designed to store the distinct spatial / textural context
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb also includes a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without missing the focus on the searched protein or gene. in phylomedb v4 all the information available for each tree
a taxonomic database is a database created to hold information on biological taxa, for example groups of organisms organized by species name or other taxonomic identifier, for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
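The chunk above describes taxon records that carry accepted names, synonyms, and attributes. A minimal sketch of the core lookup such databases support, resolving any known name (accepted or synonym) to one accepted taxon; the example data and field names are illustrative, not from any particular database:

```python
# Hypothetical taxon records: accepted name -> attributes and synonyms.
taxa = {
    "Puma concolor": {
        "author": "(Linnaeus, 1771)",
        "synonyms": ["Felis concolor"],
        "distribution": "Americas",
    },
}

# Build a name index covering accepted names and synonyms alike.
name_index = {}
for accepted, record in taxa.items():
    name_index[accepted] = accepted
    for syn in record["synonyms"]:
        name_index[syn] = accepted

def resolve(name):
    """Return the accepted name for any scientific name or synonym."""
    return name_index.get(name)

print(resolve("Felis concolor"))  # Puma concolor
```

Checklist construction then reduces to iterating the accepted names, with every historical synonym funneled to the same record.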
It's June in Australia so I should make sure to wear
[ "mittens", "shorts", "flip flops", "a bathing suit" ]
Key fact: June is during the winter in the southern hemisphere
A
0
openbookqa
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ) system, rather than an oltp ( online transaction processing ) system. modern decision - support and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse, with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in place and use compression techniques to squeeze them out, or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
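The privacy problem described above can be made concrete with a short sketch. All data and names here are invented for illustration: even though the interface only ever returns aggregates, two permitted queries combine to reveal one individual's record exactly:

```python
# Hypothetical microdata behind a statistical database.
salaries = {"alice": 52000, "bob": 48000, "carol": 61000}

def aggregate_sum(names):
    """A 'safe' query interface that only returns aggregates, never rows."""
    return sum(salaries[n] for n in names)

everyone = aggregate_sum(salaries.keys())             # sum over all three
everyone_but_carol = aggregate_sum(["alice", "bob"])  # sum over a chosen subset

# The attacker never queried an individual record, yet the difference of
# two aggregates isolates carol's salary.
carol_salary = everyone - everyone_but_carol
print(carol_salary)  # 61000
```

This is exactly the kind of inference the approaches listed after the colon (query-set size restrictions, noise addition, and the like) are meant to block.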
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as nosql databases. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
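The chunk above says relationships are stored directly and are first-class: labelled, directed, and carrying properties. A minimal sketch of that storage idea (all node and edge data invented for illustration): following a relationship is an index lookup on the source node, not a join:

```python
# Hypothetical nodes with properties.
nodes = {
    "alice": {"kind": "person"},
    "bob": {"kind": "person"},
    "neo": {"kind": "database"},
}

# Edges are first-class: (source, label, target, properties).
edges = [
    ("alice", "KNOWS", "bob", {"since": 2019}),
    ("alice", "USES", "neo", {}),
]

# Index outgoing edges by source node once, so traversal cost is
# proportional to the edges actually followed.
out = {}
for src, label, dst, props in edges:
    out.setdefault(src, []).append((label, dst, props))

def neighbours(node, label=None):
    """Nodes reachable in one hop, optionally filtered by edge label."""
    return [dst for (lbl, dst, _) in out.get(node, []) if label in (None, lbl)]

print(neighbours("alice"))           # ['bob', 'neo']
print(neighbours("alice", "KNOWS"))  # ['bob']
```

A chain of edges is then traversed by repeating the lookup per hop, which is the "easy traversal" the text contrasts with 1970s network-model databases.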
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, object - relational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop products, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
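The key point above is that the program and the store share one model of representation: objects go in and come out as objects, with no table mapping. A toy sketch of that idea, using Python's standard `pickle` serializer as a stand-in storage engine (an illustration of the concept, not a real OODBMS, and the `Part` class is invented):

```python
import pickle

class Part:
    """An application object; objects reference other objects directly."""
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

engine = Part("engine", [Part("piston"), Part("valve")])

blob = pickle.dumps(engine)    # persist the whole object graph as-is
restored = pickle.loads(blob)  # retrieve it back as live objects

print(restored.name, [c.name for c in restored.children])
```

In a relational design the same data would be flattened into rows and reassembled by the application, which is the "clearer division between the database model and the application" the text describes.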
Reproduction means a development of genes like
[ "time travel", "a personality", "magical powers", "hairline" ]
Key fact: living things can all reproduce
D
3
openbookqa
a temporal database stores data relating to time instances. it offers temporal data types and stores information relating to past, present and future time. temporal databases can be uni - temporal, bi - temporal or tri - temporal. more specifically the temporal aspects usually include valid time, transaction time and / or decision time. valid time is the time period during which, or the event time at which, a fact is true in the real world. transaction time is the time at which a fact was recorded in the database. decision time is the time at which the decision was made about the fact ; it is used to keep a history of decisions about valid times. types uni - temporal a uni - temporal database has one axis of time, either the validity range or the system time range. bi - temporal a bi - temporal database has two axes of time : valid time, plus either transaction time or decision time. tri - temporal a tri - temporal database has three axes of time : valid time, transaction time and decision time. this approach introduces additional complexities. temporal databases are in contrast to current databases ( not to be confused with currently available databases ), which store only facts which are believed to be true at the current time. features temporal databases support managing and accessing temporal data by providing one or more of the following features : a time period datatype, including the ability to represent time periods with no end ( infinity or forever ) the ability to define valid and transaction time period attributes and bitemporal relations system - maintained transaction time temporal primary keys, including
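The valid-time/transaction-time distinction above can be sketched with a tiny bi-temporal table. All facts and dates here are invented; each row carries a valid-time range (when the fact was true in the world) and a transaction-time range (when the database believed it), and a query fixes both axes:

```python
import datetime as dt

INF = dt.date.max  # open-ended range ("forever")

# (fact, valid_from, valid_to, recorded_from, recorded_to)
rows = [
    # We first recorded that Ann lives in Oslo with no end date...
    ("Ann lives in Oslo", dt.date(2020, 1, 1), INF,
     dt.date(2020, 1, 5), dt.date(2021, 3, 1)),
    # ...then on 2021-03-01 corrected history: she moved in 2021.
    ("Ann lives in Oslo", dt.date(2020, 1, 1), dt.date(2021, 1, 1),
     dt.date(2021, 3, 1), INF),
    ("Ann lives in Bergen", dt.date(2021, 1, 1), INF,
     dt.date(2021, 3, 1), INF),
]

def as_of(valid, recorded):
    """What did the database say (at `recorded`) was true at `valid`?"""
    return [f for (f, vf, vt, rf, rt) in rows
            if vf <= valid < vt and rf <= recorded < rt]

print(as_of(dt.date(2021, 6, 1), dt.date(2021, 6, 1)))  # ['Ann lives in Bergen']
print(as_of(dt.date(2021, 6, 1), dt.date(2020, 6, 1)))  # ['Ann lives in Oslo']
```

The second query shows why transaction time matters: it reconstructs what the database believed before the correction, which a current-only database cannot do.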
in psychology and cognitive science, a schema ( pl. : schemata or schemas ) describes a pattern of thought or behavior that organizes categories of information and the relationships among them. it can also be described as a mental structure of preconceived ideas, a framework representing some aspect of the world, or a system of organizing and perceiving new information, such as a mental schema or conceptual model. schemata influence attention and the absorption of new knowledge : people are more likely to notice things that fit into their schema, while re - interpreting contradictions to the schema as exceptions or distorting them to fit. schemata have a tendency to remain unchanged, even in the face of contradictory information. schemata can help in understanding the world and the rapidly changing environment. people can organize new perceptions into schemata quickly as most situations do not require complex thought when using schema, since automatic thought is all that is required. people use schemata to organize current knowledge and provide a framework for future understanding. examples of schemata include mental models, social schemas, stereotypes, social roles, scripts, worldviews, heuristics, and archetypes. in piaget's theory of development, children construct a series of schemata, based on the interactions they experience, to help them understand the world. history " schema " comes from the greek word schēmat or schēma, meaning
until the 1980s, databases were viewed as computer systems that stored record - oriented and business data such as manufacturing inventories, bank records, and sales transactions. a database system was not expected to merge numeric data with text, images, or multimedia information, nor was it expected to automatically notice patterns in the data it stored. in the late 1980s the concept of an intelligent database was put forward as a system that manages information ( rather than data ) in a way that appears natural to users and which goes beyond simple record keeping. the term was introduced in 1989 by the book intelligent databases by kamran parsaye, mark chignell, setrag khoshafian and harry wong. the concept postulated three levels of intelligence for such systems : high level tools, the user interface and the database engine. the high level tools manage data quality and automatically discover relevant patterns in the data with a process called data mining. this layer often relies on the use of artificial intelligence techniques. the user interface uses hypermedia in a form that uniformly manages text, images and numeric data. the intelligent database engine supports the other two layers, often merging relational database techniques with object orientation. in the twenty - first century, intelligent databases have now become widespread, e. g. hospital databases can now call up patient histories consisting of charts, text and x - ray images just with a few mouse clicks, and many corporate databases include decision support tools based on sales pattern analysis. external links intelligent databases, book
Which of the following statements is true
[ "biofuel releases CO2 but is better than oil", "biofuel is without flaws", "biofuel can single-handedly end CO2 production", "biofuel is perfect for the environment" ]
Key fact: biofuel releases carbon dioxide into the atmosphere
A
0
openbookqa
biofuel is a fuel that is produced over a short time span from biomass, rather than by the very slow natural processes involved in the formation of fossil fuels such as oil. biofuel can be produced from plants or from agricultural, domestic or industrial biowaste. biofuels are mostly used for transportation, but can also be used for heating and electricity. biofuels ( and bioenergy in general ) are regarded as a renewable energy source. the use of biofuel has been subject to criticism regarding the " food vs fuel " debate, varied assessments of their sustainability, and ongoing deforestation and biodiversity loss as a result of biofuel production. in general, biofuels emit fewer greenhouse gas emissions when burned in an engine and are generally considered carbon - neutral fuels as the carbon emitted has been captured from the atmosphere by the crops used in production. however, life - cycle assessments of biofuels have shown large emissions associated with the potential land - use change required to produce additional biofuel feedstocks. the outcomes of life - cycle assessments ( lcas ) for biofuels are highly situational and dependent on many factors including the type of feedstock, production routes, data variations, and methodological choices. estimates about the climate impact from biofuels vary widely based on the methodology and exact situation examined. therefore, the climate change mitigation potential of biofuel varies considerably : in some scenarios emission levels are comparable to fossil fuels,
biofuels are useful because they are liquid. biofuels can go into a gas tank unlike many other types of alternative energy.
bioenergy with carbon capture and storage ( beccs ) is the process of extracting bioenergy from biomass and capturing and storing the carbon dioxide ( co2 ) that is produced. greenhouse gas emissions from bioenergy can be low because when vegetation is harvested for bioenergy, new vegetation can grow that will absorb co2 from the air through photosynthesis. after the biomass is harvested, energy ( " bioenergy " ) is extracted in useful forms ( electricity, heat, biofuels, etc. ) as the biomass is utilized through combustion, fermentation, pyrolysis or other conversion methods. using bioenergy releases co2. in beccs, some of the co2 is captured before it enters the atmosphere, and stored underground using carbon capture and storage technology. under some conditions, beccs can remove carbon dioxide from the atmosphere. the potential range of negative emissions from beccs was estimated to be zero to 22 gigatonnes per year. as of 2024, there are 3 large - scale beccs projects operating in the world. wide deployment of beccs is constrained by cost and availability of biomass. since biomass production is land - intensive, deployment of beccs can pose major risks to food production, human rights, and biodiversity. negative emission the main appeal of beccs is in its ability to result in negative emissions of co2. the capture of carbon dioxide from bioenergy sources effectively removes
Baking soda can react chemically with what?
[ "oxidized alcohol", "sunlight", "dirt", "wind" ]
Key fact: baking soda can react chemically with vinegar
A
0
openbookqa
a chemical database is a database specifically designed to store chemical information. this information is about chemical and crystal structures, spectra, reactions and syntheses, and thermophysical data. types of chemical databases bioactivity database bioactivity databases correlate structures or other chemical information to bioactivity results taken from bioassays in literature, patents, and screening programs. chemical structures chemical structures are traditionally represented using lines indicating chemical bonds between atoms and drawn on paper ( 2d structural formulae ). while these are ideal visual representations for the chemist, they are unsuitable for computational use and especially for search and storage. small molecules ( also called ligands in drug design applications ), are usually represented using lists of atoms and their connections. large molecules such as proteins are however more compactly represented using the sequences of their amino acid building blocks. radioactive isotopes are also represented, which is an important attribute for some applications. large chemical databases for structures are expected to handle the storage and searching of information on millions of molecules taking terabytes of physical memory. literature database chemical literature databases correlate structures or other chemical information to relevant references such as academic papers or patents. this type of database includes stn, scifinder, and reaxys. links to literature are also included in many databases that focus on chemical characterization. crystallographic database crystallographic databases store x - ray crystal structure data. common examples include protein data bank and cambridge structural database. nmr spectra database nmr
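The chunk above says small molecules are usually represented as lists of atoms and their connections. A minimal sketch of that representation, using ethanol (CH3CH2OH) with hydrogens left implicit; the structure and helper names are illustrative, not any specific file format used by these databases:

```python
# Atom list (indexed 0, 1, 2) and bond list: (atom_i, atom_j, bond order).
atoms = ["C", "C", "O"]
bonds = [(0, 1, 1), (1, 2, 1)]

def degree(i):
    """Number of explicit bonds involving atom i."""
    return sum(1 for a, b, _ in bonds if i in (a, b))

# A naive substructure test over the connection list: is there a C-O bond?
has_c_o = any({atoms[a], atoms[b]} == {"C", "O"} for a, b, _ in bonds)

print(degree(1), has_c_o)  # 2 True
```

Real structure databases index such connection tables (or linear encodings like SMILES) so that substructure and similarity searches scale to millions of molecules rather than scanning bond lists one by one.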
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as nosql databases. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
the chemical database service is an epsrc - funded mid - range facility that provides uk academic institutions with access to a number of chemical databases. it has been hosted by the royal society of chemistry since 2013, before which it was hosted by daresbury laboratory ( part of the science and technology facilities council ). currently, the included databases are : acd / i - lab, a tool for prediction of physicochemical properties and nmr spectra from a chemical structure ; available chemicals directory, a structure - searchable database of commercially available chemicals ; cambridge structural database ( csd ), a crystallographic database of organic and organometallic structures ; inorganic crystal structure database ( icsd ), a crystallographic database of inorganic structures ; crystalworks, a database combining data from csd, icsd and crystmet ; detherm, a database of thermophysical data for chemical compounds and mixtures ; and spresiweb, a database of organic compounds and reactions.
The wolf raises his family near a
[ "den", "field", "tree", "mother." ]
Key fact: animals live and feed near their habitats
A
0
openbookqa
a hierarchical database model is a data model in which the data is organized into a tree - like structure. the data are stored as records, each of which is a collection of one or more fields. each field contains a single value, and the collection of fields in a record defines its type. one type of field is the link, which connects a given record to associated records. using links, records link to other records, and those to further records, forming a tree. an example is a " customer " record that has links to that customer's " orders ", which in turn link to " line _ items ". the hierarchical database model mandates that each child record has only one parent, whereas each parent record can have zero or more child records. the network model extends the hierarchical one by allowing multiple parents and children. in order to retrieve data from these databases, the whole tree needs to be traversed starting from the root node. both models were well suited to data that was normally stored on tape drives, which had to move the tape from end to end in order to retrieve data. when the relational database model emerged, one criticism of hierarchical database models was their close dependence on application - specific implementation. this limitation, along with the relational model's ease of use, contributed to the popularity of relational databases, despite their initially lower performance in comparison with the existing network and hierarchical models. history the hierarchical structure was developed by ibm in the 1960s and used in early mainframe dbms. records' relationships form a tree
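The customer/orders/line_items example above can be sketched directly: each child record carries a link to its single parent, and retrieval walks the tree from the root. Record contents and ids here are invented for illustration:

```python
# Hypothetical records; each child holds exactly one parent link.
records = {
    1: {"type": "customer", "name": "acme", "parent": None},
    2: {"type": "order", "id": "A-17", "parent": 1},
    3: {"type": "order", "id": "A-18", "parent": 1},
    4: {"type": "line_item", "sku": "bolt", "parent": 2},
}

def children(parent_id):
    """Each parent can have zero or more children; each child one parent."""
    return [rid for rid, r in records.items() if r["parent"] == parent_id]

def walk(root):
    """Depth-first traversal from the root, as hierarchical retrieval requires."""
    yield root
    for c in children(root):
        yield from walk(c)

print(list(walk(1)))  # [1, 2, 4, 3]
```

The one-parent rule is what distinguishes this from the network model: allowing a record to appear under several parents would turn the tree into a general graph.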
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, object - relational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop products, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
treefam ( tree families database ) is a database of phylogenetic trees of animal genes. it aims at developing a curated resource that gives reliable information about ortholog and paralog assignments, and evolutionary history of various gene families. treefam defines a gene family as a group of genes that evolved after the speciation of single - metazoan animals. it also tries to include outgroup genes like yeast ( s. cerevisiae and s. pombe ) and plant ( a. thaliana ) to reveal these distant members. treefam is also an ortholog database. unlike other pairwise alignment based ones, treefam infers orthologs by means of gene trees. it fits a gene tree into the universal species tree and finds historical duplications, speciations and losses events. treefam uses this information to evaluate tree building, guide manual curation, and infer complex ortholog and paralog relations. the basic elements of treefam are gene families that can be divided into two parts : treefam - a and treefam - b families. treefam - b families are automatically created. they might contain errors given complex phylogenies. treefam - a families are manually curated from treefam - b ones. family names and node names are assigned at the same time. the ultimate goal of treefam is to present a curated resource for all the families. treefa
condensation is a stage in the water cycle process when
[ "ice splashes water in my glass", "a raindrop lands in my eye", "a moist film is on my spectacles", "sweat falls into my eyes" ]
Key fact: condensation is a stage in the water cycle process
C
2
openbookqa
precipitation is water that falls from clouds. it may fall as liquid or frozen water. types of frozen precipitation include snow, sleet, freezing rain, and hail.
the atmosphere is an exchange pool for water. ice masses, aquifers, and the deep ocean are water reservoirs.
wetware is a term drawn from the computer - related idea of hardware or software, but applied to biological life forms. usage the prefix " wet " is a reference to the water found in living creatures. wetware is used to describe the elements equivalent to hardware and software found in a person, especially the central nervous system ( cns ) and the human mind. the term wetware finds use in works of fiction, in scholarly publications and in popularizations. the " hardware " component of wetware concerns the bioelectric and biochemical properties of the cns, specifically the brain. if the sequence of impulses traveling across the various neurons are thought of symbolically as software, then the physical neurons would be the hardware. the amalgamated interaction of this software and hardware is manifested through continuously changing physical connections, and chemical and electrical influences that spread across the body. the process by which the mind and brain interact to produce the collection of experiences that we define as self - awareness is in question. history although the exact definition has shifted over time, the term wetware and its fundamental reference to " the physical mind " has been around at least since the mid - 1950s. mostly used in relatively obscure articles and papers, it was not until the heyday of cyberpunk, however, that the term found broad adoption. among the first uses of the term in popular culture was the bruce sterling novel schismatrix ( 1985 ) and the michael swanwick novel vacuum flowers ( 1987 ). rudy ru
You couldn't discover the shape of an object if you had
[ "nose plug", "tape over mouth", "ear plugs", "hands behind back" ]
Key fact: the shape of an object can be discovered through feeling that object
D
3
openbookqa
ear - eeg is a method for measuring dynamics of brain activity through the minute voltage changes observable on the skin, typically by placing electrodes on the scalp. in ear - eeg, the electrodes are exclusively placed in or around the outer ear, resulting in both a much greater invisibility and wearer mobility compared to full scalp electroencephalography ( eeg ), but also significantly reduced signal amplitude, as well as reduction in the number of brain regions in which activity can be measured. it may broadly be partitioned into two groups : those using electrode positions exclusively within the concha and ear canal, and those also placing electrodes close to the ear, usually hidden behind the ear lobe. generally speaking, the first type will be the most invisible, but also offer the most challenging ( noisy ) signal. ear - eeg is a good candidate for inclusion in a hearable device, however, due to the high complexity of ear - eeg sensors, this has not yet been done. history ear - eeg was first described in a patent application, and subsequently in other publications. since then, it has grown to be an endeavor spread across multiple research groups and collaborations, as well as private companies. notable incarnations of the technology are the ceegrid ( see picture to the right ) and the custom 3d - printed ear plugs from neurotechnology lab ( see picture above ). attempts at creating in - ear generic earpieces are also known
in anatomy, the eustachian tube ( ), also called the auditory tube or pharyngotympanic tube, is a tube that links the nasopharynx to the middle ear, of which it is also a part. in adult humans, the eustachian tube is approximately 35 mm ( 1. 4 in ) long and 3 mm ( 0. 12 in ) in diameter. it is named after the sixteenth - century italian anatomist bartolomeo eustachi. in humans and other tetrapods, both the middle ear and the ear canal are normally filled with air.
the 10 - 20 system or international 10 - 20 system is an internationally recognized method to describe and apply the location of scalp electrodes in the context of an eeg exam, polysomnograph sleep study, or voluntary lab research. this method was developed to maintain standardized testing methods ensuring that a subject's study outcomes ( clinical or research ) could be compiled, reproduced, and effectively analyzed and compared using the scientific method. the system is based on the relationship between the location of an electrode and the underlying area of the brain, specifically the cerebral cortex. across all phases of consciousness, brains produce different, objectively recognizable and distinguishable electrical patterns, which can be detected by electrodes on the skin. these patterns vary, and are affected by multiple extrinsic factors, including age, prescription drugs, somatic diagnoses, history of neurologic insults / injury / trauma, and substance abuse. the " 10 " and " 20 " refer to the fact that the actual distances between adjacent electrodes are either 10 % or 20 % of the total front - back or right - left distance of the skull. for example, a measurement is taken across the top of the head, from the nasion to the inion. most other common measurements ('landmarking methods') start at one ear and end at the other, normally over the top of the head. specific anatomical locations of the ear used include the tragus, the auricle and the mastoid. electrode labeling each electrode placement
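The 10%/20% rule above turns a single skull measurement into electrode positions. A small worked sketch along the nasion-inion midline (the measurement value is illustrative; the midline labels Fpz, Fz, Cz, Pz, Oz are the standard ones, placed at 10%, then four 20% steps):

```python
nasion_to_inion_cm = 36.0  # illustrative head measurement

# Cumulative fractions along the midline: 10%, then four 20% steps
# (a final 10% remains between Oz and the inion).
steps = [0.10, 0.20, 0.20, 0.20, 0.20]
labels = ["Fpz", "Fz", "Cz", "Pz", "Oz"]

pos, positions = 0.0, {}
for label, step in zip(labels, steps):
    pos += step
    positions[label] = round(pos * nasion_to_inion_cm, 1)

print(positions["Cz"])  # 18.0 cm, i.e. 50% of the distance: the vertex
```

Because the spacings are percentages rather than fixed distances, the same labels land over comparable cortical areas across different head sizes, which is what makes results reproducible between subjects.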
A skunk wards off predators with
[ "a noxious spray", "bad taste", "a powerful nose", "smelling good" ]
Key fact: most animals avoid bad odors
A
0
openbookqa
the bitterdb is a database of compounds that were reported to taste bitter to humans. the aim of the bitterdb database is to gather information about bitter - tasting natural and synthetic compounds, and their cognate bitter taste receptors ( t2rs or tas2rs ). summary the bitterdb includes over 670 compounds that were reported to taste bitter to humans. the compounds can be searched by name, chemical structure, similarity to other bitter compounds, association with a particular human bitter taste receptor, and by other properties as well. the database also contains information on mutations in bitter taste receptors that were shown to influence receptor activation by bitter compounds. database overview bitter compounds bitterdb currently contains more than 670 compounds that were cited in the literature as bitter. for each compound, the database offers information regarding its molecular properties, references for the compound's bitterness, including additional information about the bitterness category of the compound ( e. g. a bitter - sweet or slightly bitter annotation ), different compound identifiers ( smiles, cas registry number, iupac systematic name ), an indication whether the compound is derived from a natural source or is synthetic, a link to the compound's pubchem entry and different file formats for downloading ( sdf, image, smiles ). over 200 bitter compounds have been experimentally linked to their corresponding human bitter taste receptors. for those compounds, bitterdb provides additional information, including links to the publications indicating these ligand - receptor interactions, the effective concentration for receptor
an aroma compound, also known as an odorant, aroma, fragrance, flavoring or flavor, is a chemical compound that has a smell or odor. for an individual chemical or class of chemical compounds to impart a smell or fragrance, it must be sufficiently volatile for transmission via the air to the olfactory system in the upper part of the nose. as examples, various fragrant fruits have diverse aroma compounds, particularly strawberries which are commercially cultivated to have appealing aromas, and contain several hundred aroma compounds. generally, molecules meeting this specification have molecular weights of less than 310. flavors affect both the sense of taste and smell, whereas fragrances affect only smell. flavors tend to be naturally occurring, and the term fragrances may also apply to synthetic compounds, such as those used in cosmetics. aroma compounds can naturally be found in various foods, such as fruits and their peels, wine, spices, floral scent, perfumes, fragrance oils, and essential oils. for example, many form biochemically during the ripening of fruits and other crops. wines have more than 100 aromas that form as byproducts of fermentation. also, many of the aroma compounds play a significant role in the production of compounds used in the food service industry to flavor, improve, and generally increase the appeal of their products. an odorizer may add a detectable odor to a dangerous odorless substance, like propane, natural gas, or hydrogen, as a safety measure. aroma
a chemical database is a database specifically designed to store chemical information. this information is about chemical and crystal structures, spectra, reactions and syntheses, and thermophysical data. types of chemical databases bioactivity database bioactivity databases correlate structures or other chemical information to bioactivity results taken from bioassays in literature, patents, and screening programs. chemical structures chemical structures are traditionally represented using lines indicating chemical bonds between atoms and drawn on paper ( 2d structural formulae ). while these are ideal visual representations for the chemist, they are unsuitable for computational use and especially for search and storage. small molecules ( also called ligands in drug design applications ), are usually represented using lists of atoms and their connections. large molecules such as proteins are however more compactly represented using the sequences of their amino acid building blocks. radioactive isotopes are also represented, which is an important attribute for some applications. large chemical databases for structures are expected to handle the storage and searching of information on millions of molecules taking terabytes of physical memory. literature database chemical literature databases correlate structures or other chemical information to relevant references such as academic papers or patents. this type of database includes stn, scifinder, and reaxys. links to literature are also included in many databases that focus on chemical characterization. crystallographic database crystallographic databases store x - ray crystal structure data. common examples include protein data bank and cambridge structural database. nmr spectra database nmr
A piglet was accidentally stepped on. In order to heal up, the piglet is offered
[ "a new toy", "a pet puppy", "slop", "a warm bath" ]
Key fact: an animal requires nutrients to grow and heal
C
2
openbookqa
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
the new service is less like a game. the user picks a category and they are shown a set of images. they go through each image and state whether it has been correctly categorised.
the new service is less like a game. the user picks a category and they are shown a set of images. they go through each image and state whether it has been correctly categorised.
To see an example of xylem in work
[ "organize flowers in a bouquet", "put a rose in food dye", "pull a plant from the root", "poor water on a plant" ]
Key fact: xylem carries water from the roots of a plant to the leaves of a plant
B
1
openbookqa
dr. duke's phytochemical and ethnobotanical databases is an online database developed by james a. duke at the usda. the databases report species, phytochemicals, and biological activity, as well as ethnobotanical uses. the current phytochemical and ethnobotanical databases facilitate plant, chemical, bioactivity, and ethnobotany searches. a large number of plants and their chemical profiles are covered, and data are structured to support browsing and searching in several user - focused ways. for example, users can: get a list of chemicals and activities for a specific plant of interest, using either its scientific or common name; download a list of chemicals and their known activities in pdf or spreadsheet form; find plants with chemicals known for a specific biological activity; display a list of chemicals with their ld toxicity data; find plants with potential cancer - preventing activity; display a list of plants for a given ethnobotanical use; find out which plants have the highest levels of a specific chemical. references to the supporting scientific publications are provided for each specific result. also included are links to nutritional databases, plants and cancer treatments and other plant - related databases. the content of the database is licensed under the creative commons cc0 public domain. external links dr. duke's phytochemical and ethnobotanical databases references ( dataset ) u. s. department of agriculture, agricultural research service. 1992 - 2016
plants are complex organisms with tissues organized into organs.
the plant proteome database is a national science foundation - funded project to determine the biological function of each protein in plants. it includes data for two plants that are widely studied in molecular biology, arabidopsis thaliana and maize ( zea mays ). initially the project was limited to plant plastids, under the name of the plastid pdb, but was expanded and renamed plant pdb in november 2007. see also proteome references external links plant proteome database home page
A person who has a job of making discoveries also
[ "watches", "fights", "slaughters", "explodes" ]
Key fact: scientists make observations
A
0
openbookqa
gun ( also known as graph universe node, gun. js, and gundb ) is an open source, offline - first, real - time, decentralized, graph database written in javascript for the web browser. the database is implemented as a peer - to - peer network distributed across " browser peers " and " runtime peers ". it employs multi - master replication with a custom commutative replicated data type ( crdt ). gun is currently used in the decentralized version of the internet archive. references external links official website gun on github
in a database, a view is the result set of a stored query that presents a limited perspective of the database to a user. this pre - established query command is kept in the data dictionary. unlike ordinary base tables in a relational database, a view does not form part of the physical schema : as a result set, it is a virtual table computed or collated dynamically from data in the database when access to that view is requested. changes applied to the data in a relevant underlying table are reflected in the data shown in subsequent invocations of the view. views can provide advantages over tables : views can represent a subset of the data contained in a table. consequently, a view can limit the degree of exposure of the underlying tables to the outer world : a given user may have permission to query the view, while denied access to the rest of the base table. views can join and simplify multiple tables into a single virtual table. views can act as aggregated tables, where the database engine aggregates data ( sum, average, etc. ) and presents the calculated results as part of the data. views can hide the complexity of data. for example, a view could appear as sales2020 or sales2021, transparently partitioning the actual underlying table. views take very little space to store ; the database contains only the definition of a view, not a copy of all the data that it presents. views structure data in a way that classes of users find natural and intuitive.
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
which of these need to be present for a tectonic plate movement
[ "a pool of molten lava", "an ocean with fish", "a river flowing north", "a crack in the core" ]
Key fact: a tectonic plate moves along a fault line
D
3
openbookqa
metpetdb is a relational database and repository for global geochemical data on and images collected from metamorphic rocks from the earth's crust. metpetdb is designed and built by a global community of metamorphic petrologists in collaboration with computer scientists at rensselaer polytechnic institute as part of the national cyberinfrastructure initiative and supported by the national science foundation. metpetdb is unique in that it incorporates image data collected by a variety of techniques, e. g. photomicrographs, backscattered electron images ( sem ), and x - ray maps collected by wavelength dispersive spectroscopy or energy dispersive spectroscopy. purpose metpetdb was built for the purpose of archiving published data and for storing new data for ready access to researchers and students in the petrologic community. this database facilitates the gathering of information for researchers beginning new projects and permits browsing and searching for data relating to anywhere on the globe. metpetdb provides a platform for collaborative studies among researchers anywhere on the planet, serves as a portal for students beginning their studies of metamorphic geology, and acts as a repository of vast quantities of data being collected by researchers globally. design the basic structure of metpetdb is based on a geologic sample and derivative subsamples. geochemical data are linked to subsamples and the minerals within them, while image data can relate to samples or subsamples. metpetdb is designed to store the distinct spatial / textural context
minerals form as magma or lava cools.
igneous rocks form from cooled magma or lava.
A lake with two buckets of ice water poured into it each day will likely
[ "shrink", "dehydrate", "swell", "drain" ]
Key fact: as the amount of water in a body of water increases , the water levels will increase
C
2
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
a data stream management system ( dsms ) is a computer software system to manage continuous data streams. it is similar to a database management system ( dbms ), which is, however, designed for static data in conventional databases. a dbms also offers a flexible query processing so that the information needed can be expressed using queries. however, in contrast to a dbms, a dsms executes a continuous query that is not only performed once, but is permanently installed. therefore, the query is continuously executed until it is explicitly uninstalled. since most dsms are data - driven, a continuous query produces new results as long as new data arrive at the system. this basic concept is similar to complex event processing so that both technologies are partially coalescing. functional principle one important feature of a dsms is the possibility to handle potentially infinite and rapidly changing data streams by offering flexible processing at the same time, although there are only limited resources such as main memory. the following table provides various principles of dsms and compares them to traditional dbms. processing and streaming models one of the biggest challenges for a dsms is to handle potentially infinite data streams using a fixed amount of memory and no random access to the data. there are different approaches to limit the amount of data in one pass, which can be divided into two classes. for the one hand, there are compression techniques that try to summarize the data and for the other hand there are window techniques that try to portion
denormalization is a strategy used on a previously - normalized database to increase performance. in computing, denormalization is the process of trying to improve the read performance of a database, at the expense of losing some write performance, by adding redundant copies of data or by grouping data. it is often motivated by performance or scalability in relational database software needing to carry out very large numbers of read operations. denormalization differs from the unnormalized form in that denormalization benefits can only be fully realized on a data model that is otherwise normalized. implementation a normalized design will often " store " different but related pieces of information in separate logical tables ( called relations ). if these relations are stored physically as separate disk files, completing a database query that draws information from several relations ( a join operation ) can be slow. if many relations are joined, it may be prohibitively slow. there are two strategies for dealing with this by denormalization : " dbms support " : the database management system stores redundant copies in the background, which are kept consistent by the dbms software " dba implementation " : the database administrator ( or designer ) design around the problem by denormalizing the logical data design dbms support with this approach, database administrators can keep the logical design normalized, but allow the database management system ( dbms ) to store additional redundant information on disk to optimize query response. in this case it is the db
Which is likely to boil?
[ "a cup of dirt", "a cup of tacos", "a cup of plasma", "a cup of Earl Grey" ]
Key fact: boiling is when liquids are heated above their boiling point
D
3
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
the flat ( or table ) model consists of a single, two - dimensional array of data elements, where all members of a given column are assumed to be similar values, and all members of a row are assumed to be related to one another. for instance, columns for name and password that might be used as a part of a system security database. each row would have the specific password associated with an individual user. columns of the table often have a type associated with them, defining them as character data, date or time information, integers, or floating point numbers. this tabular format is a precursor to the relational model.
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by
Lions eat animals
[ "in a different environment from where they live", "in the water", "underground", "in the same environment where they live" ]
Key fact: most predators live in the same environment as their prey
D
3
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
then the bulk of effort concentrates on writing the proper mediator code that will transform predicates on weather into a query over the weather website. this effort can become complex if some other source also relates to weather, because the designer may need to write code to properly combine the results from the two sources. on the other hand, in lav, the source database is modeled as a set of views over g { \ displaystyle g }.
then the bulk of effort concentrates on writing the proper mediator code that will transform predicates on weather into a query over the weather website. this effort can become complex if some other source also relates to weather, because the designer may need to write code to properly combine the results from the two sources. on the other hand, in lav, the source database is modeled as a set of views over g { \ displaystyle g }.
A solution could be
[ "pebbles and soil", "peas and corn", "juice and vodka", "toothpaste and bristles" ]
Key fact: a solution is made of one substance dissolved in another substance
C
2
openbookqa
dr. duke's phytochemical and ethnobotanical databases is an online database developed by james a. duke at the usda. the databases report species, phytochemicals, and biological activity, as well as ethnobotanical uses. the current phytochemical and ethnobotanical databases facilitate plant, chemical, bioactivity, and ethnobotany searches. a large number of plants and their chemical profiles are covered, and data are structured to support browsing and searching in several user - focused ways. for example, users can: get a list of chemicals and activities for a specific plant of interest, using either its scientific or common name; download a list of chemicals and their known activities in pdf or spreadsheet form; find plants with chemicals known for a specific biological activity; display a list of chemicals with their ld toxicity data; find plants with potential cancer - preventing activity; display a list of plants for a given ethnobotanical use; find out which plants have the highest levels of a specific chemical. references to the supporting scientific publications are provided for each specific result. also included are links to nutritional databases, plants and cancer treatments and other plant - related databases. the content of the database is licensed under the creative commons cc0 public domain. external links dr. duke's phytochemical and ethnobotanical databases references ( dataset ) u. s. department of agriculture, agricultural research service. 1992 - 2016
the plant proteome database is a national science foundation - funded project to determine the biological function of each protein in plants. it includes data for two plants that are widely studied in molecular biology, arabidopsis thaliana and maize ( zea mays ). initially the project was limited to plant plastids, under the name of the plastid pdb, but was expanded and renamed plant pdb in november 2007. see also proteome references external links plant proteome database home page
a chemical database is a database specifically designed to store chemical information. this information is about chemical and crystal structures, spectra, reactions and syntheses, and thermophysical data. types of chemical databases bioactivity database bioactivity databases correlate structures or other chemical information to bioactivity results taken from bioassays in literature, patents, and screening programs. chemical structures chemical structures are traditionally represented using lines indicating chemical bonds between atoms and drawn on paper ( 2d structural formulae ). while these are ideal visual representations for the chemist, they are unsuitable for computational use and especially for search and storage. small molecules ( also called ligands in drug design applications ), are usually represented using lists of atoms and their connections. large molecules such as proteins are however more compactly represented using the sequences of their amino acid building blocks. radioactive isotopes are also represented, which is an important attribute for some applications. large chemical databases for structures are expected to handle the storage and searching of information on millions of molecules taking terabytes of physical memory. literature database chemical literature databases correlate structures or other chemical information to relevant references such as academic papers or patents. this type of database includes stn, scifinder, and reaxys. links to literature are also included in many databases that focus on chemical characterization. crystallographic database crystallographic databases store x - ray crystal structure data. common examples include protein data bank and cambridge structural database. nmr spectra database nmr
A kumquat is more likely than a steak to have
[ "fluid", "seeds", "fibers", "skin" ]
Key fact: fruit contains seeds
B
1
openbookqa
genedb was a genome database for eukaryotic and prokaryotic pathogens. references external links http://www.genedb.org
the plant proteome database is a national science foundation - funded project to determine the biological function of each protein in plants. it includes data for two plants that are widely studied in molecular biology, arabidopsis thaliana and maize ( zea mays ). initially the project was limited to plant plastids, under the name of the plastid pdb, but was expanded and renamed plant pdb in november 2007. see also proteome references external links plant proteome database home page
matrixdb is a biological database focused on molecular interactions between extracellular proteins and polysaccharides. matrixdb takes into account the multimeric nature of the extracellular proteins ( for example, collagens, laminins and thrombospondins are multimers ). the database was initially released in 2009 and is maintained by the research group of sylvie ricard - blum at umr5246, claude bernard university lyon 1. matrixdb is linked with unigene and the human protein atlas. it also allows users to build customised tissue - and disease - specific interaction networks, which can be further analysed and visualised using cytoscape or medusa. matrixdb is an active member of the international molecular exchange consortium ( imex ), a group of the major public providers of interaction data. other participating databases include the biomolecular interaction network database ( bind ), intact, the molecular interaction database ( mint ), mips, mpact, and biogrid. the databases of imex work together to prevent duplications of effort, collecting data from non - overlapping sources and sharing the curated interaction data. the imex consortium also worked to develop the hupo - psi - mi xml standard format for annotating and exchanging interaction data. matrixdb includes interaction data extracted from the literature by manual curation and offers access to relevant data involving extracellular proteins provided by imex partner databases through the psicquic webservice
Which would a carnivore eat?
[ "fiddleheads", "ramps", "dulse", "blobfish" ]
Key fact: carnivores only eat animals
D
3
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, object - relational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop products, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
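The core oodbms idea — the program and the store share one object model, so objects are persisted as-is instead of being flattened into tables — can be sketched with Python's standard pickle module. pickle is only a stand-in here: a real oodbms adds querying, transactions, and concurrency on top of object persistence.

```python
import pickle

# Sketch of the oodbms idea: the application's objects (including object
# references between them) are stored and restored directly, so the
# language and the store use the same model of representation.

class Part:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []   # object references survive storage

engine = Part("engine", [Part("piston"), Part("valve")])
blob = pickle.dumps(engine)              # persist the whole object graph
restored = pickle.loads(blob)
print(restored.name, [c.name for c in restored.children])
# engine ['piston', 'valve']
```

In a relational store the Part/child relationship would have to be decomposed into rows and foreign keys and reassembled on read; here the object graph round-trips intact, which is the consistency-of-model advantage the passage describes.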
Bacteria in the soil feed on
[ "uranium", "expired creatures", "crowd sourced work", "kryptonite" ]
Key fact: In the food chain process bacteria have the role of decomposer
B
1
openbookqa
gun ( also known as graph universe node, gun. js, and gundb ) is an open source, offline - first, real - time, decentralized, graph database written in javascript for the web browser. the database is implemented as a peer - to - peer network distributed across " browser peers " and " runtime peers ". it employs multi - master replication with a custom commutative replicated data type ( crdt ). gun is currently used in the decentralized version of the internet archive. references external links official website gun on github
exploitdb, sometimes stylized as exploit database or exploit - database, is a public and open source vulnerability database maintained by offensive security. it is one of the largest and most popular exploit databases in existence. while the database is publicly available via their website, the database can also be used by utilizing the searchsploit command - line tool which is native to kali linux. the database also contains proof - of - concepts ( pocs ), helping information security professionals learn new exploit variations. in ethical hacking and penetration testing guide, rafay baloch said exploit - db had over 20, 000 exploits, and was available in backtrack linux by default. in ceh v10 certified ethical hacker study guide, ric messier called exploit - db a " great resource ", and stated it was available within kali linux by default, or could be added to other linux distributions. the current maintainers of the database, offensive security, are not responsible for creating the database. the database was started in 2004 by a hacker group known as milw0rm and has changed hands several times. as of 2023, the database contained 45, 000 entries from more than 9, 000 unique authors. see also offensive security offensive security certified professional references external links official website
metpetdb is a relational database and repository for global geochemical data on, and images collected from, metamorphic rocks from the earth's crust. metpetdb is designed and built by a global community of metamorphic petrologists in collaboration with computer scientists at rensselaer polytechnic institute as part of the national cyberinfrastructure initiative and supported by the national science foundation. metpetdb is unique in that it incorporates image data collected by a variety of techniques, e. g. photomicrographs, backscattered electron images ( sem ), and x - ray maps collected by wavelength dispersive spectroscopy or energy dispersive spectroscopy. purpose metpetdb was built for the purpose of archiving published data and for storing new data for ready access to researchers and students in the petrologic community. this database facilitates the gathering of information for researchers beginning new projects and permits browsing and searching for data relating to anywhere on the globe. metpetdb provides a platform for collaborative studies among researchers anywhere on the planet, serves as a portal for students beginning their studies of metamorphic geology, and acts as a repository of vast quantities of data being collected by researchers globally. design the basic structure of metpetdb is based on a geologic sample and derivative subsamples. geochemical data are linked to subsamples and the minerals within them, while image data can relate to samples or subsamples. metpetdb is designed to store the distinct spatial / textural context
usually plants die or become dormant after the
[ "lowest solstice", "naptime", "a good book", "lunch" ]
Key fact: usually plants die or become dormant during the winter
A
0
openbookqa
database theory encapsulates a broad range of topics related to the study and research of the theoretical realm of databases and database management systems. theoretical aspects of data management include, among other areas, the foundations of query languages, computational complexity and expressive power of queries, finite model theory, database design theory, dependency theory, foundations of concurrency control and database recovery, deductive databases, temporal and spatial databases, real - time databases, managing uncertain data and probabilistic databases, and web data. most research work has traditionally been based on the relational model, since this model is usually considered the simplest and most foundational model of interest. corresponding results for other data models, such as object - oriented or semi - structured models, or, more recently, graph data models and xml, are often derivable from those for the relational model. database theory helps one to understand the complexity and power of query languages and their connection to logic. starting from relational algebra and first - order logic ( which are equivalent by codd's theorem ) and the insight that important queries such as graph reachability are not expressible in this language, more powerful language based on logic programming and fixpoint logic such as datalog were studied. the theory also explores foundations of query optimization and data integration. here most work studied conjunctive queries, which admit query optimization even under constraints using the chase algorithm. the main research conferences in the area are the acm symposium on principles of database systems ( pods
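The passage's point that graph reachability is not expressible in first-order logic, but is expressible in fixpoint languages like datalog, can be made concrete with a naive fixpoint iteration. The edge facts below are an invented example; the rules being iterated are the standard datalog pair reach(x, y) :- edge(x, y) and reach(x, z) :- reach(x, y), edge(y, z).

```python
# Naive datalog-style fixpoint for reachability: keep applying the
# recursive rule until no new facts are derived. First-order logic /
# basic relational algebra cannot express this unbounded recursion.

edge = {(1, 2), (2, 3), (3, 4)}

reach = set(edge)                 # base rule: every edge is a reach fact
changed = True
while changed:                    # iterate to a fixpoint
    new = {(x, z) for (x, y) in reach for (y2, z) in edge if y == y2}
    changed = not new <= reach
    reach |= new

print(sorted(reach))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```

Each pass performs a join of reach with edge; the loop terminates because reach grows monotonically within a finite domain, which is exactly the semantics of a datalog fixpoint.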
How many times per 365.3 days does an equinox occur?
[ "3", "1", "2", "4" ]
Key fact: an equinox occurs twice per year
C
2
openbookqa
I need electrical energy to
[ "Go running", "cook some bread", "Ride a bike", "Go swimming" ]
Key fact: a light bulb requires electrical energy to produce light
B
1
openbookqa
genedb was a genome database for eukaryotic and prokaryotic pathogens. references external links http://www.genedb.org
A squirrel expires in the spring and in the fall
[ "the corpse is glowing", "the corpse is decomposed", "the corpse is melted", "the corpse is flying" ]
Key fact: dead organisms rot
B
1
openbookqa
an autopsy ( also referred to as post - mortem examination, obduction, necropsy, or autopsia cadaverum ) is a surgical procedure that consists of a thorough examination of a corpse by dissection to determine the cause, mode, and manner of death ; or the exam may be performed to evaluate any disease or injury that may be present for research or educational purposes. the term necropsy is generally used for non - human animals. autopsies are usually performed by a specialized medical doctor called a pathologist. only a small portion of deaths require an autopsy to be performed, under certain circumstances. in most cases, a medical examiner or coroner can determine the cause of death. purposes of performance autopsies are performed for either legal or medical purposes. autopsies can be performed when any of the following information is desired : the manner of death ; whether death was natural or unnatural ; the source and extent of injury on the corpse ; the post - mortem interval ; the deceased's identity ; retention of relevant organs ; and, if it is an infant, live birth and viability. for example, a forensic autopsy is carried out when the cause of death may be a criminal matter, while a clinical or academic autopsy is performed to find the medical cause of death and is used in cases of unknown or uncertain death, or for research purposes. autopsies can be further classified into cases where an external examination suffices, and those where the body is dissected and an internal examination is conducted
decomposition is the process in which the organs and complex molecules of animal and human bodies break down into simple organic matter over time. in vertebrates, five stages of decomposition are typically recognized : fresh, bloat, active decay, advanced decay, and dry / skeletonized. knowing the different stages of decomposition can help investigators in determining the post - mortem interval ( pmi ). the rate of decomposition of human remains can vary due to environmental factors and other factors. environmental factors include temperature, burning, humidity, and the availability of oxygen. other factors include body size, clothing, and the cause of death. stages and characteristics the five stages of decomposition - fresh ( autolysis ), bloat, active decay, advanced decay, and dry / skeletonized - have specific characteristics that are used to identify which stage the remains are in. these stages are illustrated by reference to an experimental study of the decay of a pig corpse. fresh at this stage the remains are usually intact and free of insects. the corpse progresses through algor mortis ( a reduction in body temperature until ambient temperature is reached ), rigor mortis ( the temporary stiffening of the limbs due to chemical changes in the muscles ), and livor mortis ( pooling of the blood on the side of the body that is closest to the ground ). bloat at this stage, the microorganisms residing in the digestive system begin to digest the tissues of the body, excreting gases
a morgue or mortuary ( in a hospital or elsewhere ) is a place used for the storage of human corpses awaiting identification ( id ), removal for autopsy, respectful burial, cremation or other methods of disposal. in modern times, corpses have customarily been refrigerated to delay decomposition. etymology and lexicology the term mortuary dates from the early 14th century, from anglo - french mortuarie, meaning " gift to a parish priest from a deceased parishioner, " from medieval latin mortuarium, noun use of neuter of late latin adjective mortuarius " pertaining to the dead, " from latin mortuus, pp. of mori " to die " ( see mortal ( adj. ) ). the meaning of " place where the deceased are kept temporarily " was first recorded in 1865, as a euphemism for the earlier english term " deadhouse ". the term morgue comes from the french. first used to describe the inner wicket of a prison, where new prisoners were kept so that jailers and turnkeys could recognize them in the future, it took on its modern meaning in fifteenth - century paris, being used to describe part of the châtelet used for the storage and identification of unknown corpses. morgue is predominantly used in north american english, while mortuary is used in the u. k., although both terms are used interchangeably. the euphemisms rose cottage and rainbows end are sometimes
One odd fossil that may have been discovered is
[ "feelings", "love", "vortexes", "poop" ]
Key fact: An example of a fossil is a footprint in a rock
D
3
openbookqa
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ) system, rather than an oltp ( online transaction processing ) system. modern decision - support and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
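The inference problem the passage ends on — combining aggregate queries to learn about one individual — can be shown in a few lines. The salary table and names below are invented for the illustration.

```python
# Even if only aggregate (SUM) queries are permitted, differencing two
# allowed aggregates isolates a single individual's value: the classic
# "tracker" attack on statistical databases.

salaries = {"ann": 50_000, "bo": 62_000, "cy": 71_000}

def sum_query(names):
    """An aggregate-only query interface: returns a SUM, never a row."""
    return sum(salaries[n] for n in names)

everyone = sum_query(salaries)            # allowed: sum over all people
all_but_cy = sum_query(["ann", "bo"])     # allowed: sum over a subset
print(everyone - all_but_cy)              # 71000: cy's salary leaks anyway
```

This is why the common defences the passage is about to list (query-set size restrictions, noise addition, and so on) are needed: restricting output to aggregates alone is not sufficient.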
Compost, small rocks, and organic material make up
[ "air", "stone", "land", "water" ]
Key fact: a plant requires soil to grow
C
2
openbookqa
integrated surface database ( isd ) is a global database compiled by the national oceanic and atmospheric administration ( noaa ) and the national centers for environmental information ( ncei ) comprising hourly and synoptic surface observations compiled globally from ~ 35, 500 weather stations ; it is updated automatically, hourly. the data largely date back to paper records which were keyed in by hand from the 1960s and 1970s ( and in some cases, weather observations from over one hundred years ago ). it was developed by the joint federal climate complex project in asheville, north carolina. = = references = =
an array database management system or array dbms provides database services specifically for arrays ( also called raster data ), that is : homogeneous collections of data items ( often called pixels, voxels, etc. ), sitting on a regular grid of one, two, or more dimensions. often arrays are used to represent sensor, simulation, image, or statistics data. such arrays tend to be big data, with single objects frequently ranging into terabyte and soon petabyte sizes ; for example, today's earth and space observation archives typically grow by terabytes a day. array databases aim at offering flexible, scalable storage and retrieval on this information category. overview in the same style as standard database systems do on sets, array dbmss offer scalable, flexible storage and flexible retrieval / manipulation on arrays of ( conceptually ) unlimited size. as in practice arrays never appear standalone, such an array model normally is embedded into some overall data model, such as the relational model. some systems implement arrays as an analogy to tables, some introduce arrays as an additional attribute type. management of arrays requires novel techniques, particularly due to the fact that traditional database tuples and objects tend to fit well into a single database page - a unit of disk access on the server, typically 4 kb - while array objects easily can span several media. the prime task of the array storage manager is to give fast access to large arrays and sub - arrays. to this end, arrays get partitioned, during insertion, into
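The partitioning idea the passage ends on can be sketched as a simple 2-d tiling: the array is cut into fixed-size tiles at insertion time, so a sub-array read only needs to touch the tiles it overlaps. The tile size and the 4x4 example array below are arbitrary choices for the illustration, not any particular array dbms's layout.

```python
# Sketch of array tiling: partition a 2-d array into fixed-size tiles,
# keyed by tile coordinates, so sub-array access maps to tile lookups.

def tile(array, rows, cols, th, tw):
    tiles = {}
    for r in range(0, rows, th):
        for c in range(0, cols, tw):
            tiles[(r // th, c // tw)] = [row[c:c + tw]
                                         for row in array[r:r + th]]
    return tiles

data = [[r * 4 + c for c in range(4)] for r in range(4)]   # 4x4 array
tiles = tile(data, 4, 4, 2, 2)                             # four 2x2 tiles
print(tiles[(1, 0)])  # [[8, 9], [12, 13]]
```

In a real system each tile would be sized to match the storage unit (a page or larger) and written contiguously, so reading a sub-array costs one access per overlapped tile rather than a scan of the whole object.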
over the last two centuries many environmental chemical observations have been made from a variety of ground - based, airborne, and orbital platforms and deposited in databases. many of these databases are publicly available. all of the instruments mentioned in this article give online public access to their data. these observations are critical in developing our understanding of the earth's atmosphere and issues such as climate change, ozone depletion and air quality. some of the external links provide repositories of many of these datasets in one place. for example, the cambridge atmospheric chemical database is a large database in a uniform ascii format. each observation is augmented with the meteorological conditions such as the temperature, potential temperature, geopotential height, and equivalent pv latitude. ground - based and balloon observations ndsc observations. the network for the detection of stratospheric change ( ndsc ) is a set of high - quality remote - sounding research stations for observing and understanding the physical and chemical state of the stratosphere. ozone and key ozone - related chemical compounds and parameters are targeted for measurement. the ndsc is a major component of the international upper atmosphere research effort and has been endorsed by national and international scientific agencies, including the international ozone commission, the united nations environment programme ( unep ), and the world meteorological organization ( wmo ). the primary instruments and measurements are : ozone lidar ( vertical profiles of ozone from the tropopause to at least 40 km altitude
Above 100 degrees Celsius a kind of water is what?
[ "vapor particles", "solid", "ice", "frigid" ]
Key fact: steam is a kind of water above 100 degrees celsius
A
0
openbookqa
over the last two centuries many environmental chemical observations have been made from a variety of ground - based, airborne, and orbital platforms and deposited in databases. many of these databases are publicly available. all of the instruments mentioned in this article give online public access to their data. these observations are critical in developing our understanding of the earth's atmosphere and issues such as climate change, ozone depletion and air quality. some of the external links provide repositories of many of these datasets in one place. for example, the cambridge atmospheric chemical database, is a large database in a uniform ascii format. each observation is augmented with the meteorological conditions such as the temperature, potential temperature, geopotential height, and equivalent pv latitude. ground - based and balloon observations ndsc observations. the network for the detection for stratospheric change ( ndsc ) is a set of high - quality remote - sounding research stations for observing and understanding the physical and chemical state of the stratosphere. ozone and key ozone - related chemical compounds and parameters are targeted for measurement. the ndsc is a major component of the international upper atmosphere research effort and has been endorsed by national and international scientific agencies, including the international ozone commission, the united nations environment programme ( unep ), and the world meteorological organization ( wmo ). the primary instruments and measurements are : ozone lidar ( vertical profiles of ozone from the tropopause to at least 40 km altitude
the chemical database service is an epsrc - funded mid - range facility that provides uk academic institutions with access to a number of chemical databases. it has been hosted by the royal society of chemistry since 2013, before which it was hosted by daresbury laboratory ( part of the science and technology facilities council ). currently, the included databases are : acd / i - lab, a tool for prediction of physicochemical properties and nmr spectra from a chemical structure available chemicals directory, a structure - searchable database of commercially available chemicals cambridge structural database ( csd ), a crystallographic database of organic and organometallic structures inorganic crystal structure database ( icsd ), a crystallographic database of inorganic structures crystalworks, a database combining data from csd, icsd and crystmet detherm, a database of thermophysical data for chemical compounds and mixtures spresiweb, a database of organic compounds and reactions = = references = =
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
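The model described in the chunk above — nodes and directed, labelled edges that carry properties, with traversal following stored adjacency directly rather than computing a join — can be sketched minimally. All class, node, and edge names here are invented for illustration and are not drawn from any real graph database.

```python
# Minimal sketch of the graph model: nodes and directed, labelled edges,
# each with a property dict. A traversal is "one operation" in the sense
# that it walks an adjacency list stored with the node.

class TinyGraph:
    def __init__(self):
        self.nodes = {}      # node id -> property dict
        self.out_edges = {}  # node id -> list of (label, target id, properties)

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props
        self.out_edges.setdefault(node_id, [])

    def add_edge(self, src, label, dst, **props):
        # relationships are first-class: labelled, directed, with properties
        self.out_edges[src].append((label, dst, props))

    def neighbours(self, node_id, label=None):
        """Follow stored adjacency directly, optionally filtered by label."""
        return [dst for (lbl, dst, _) in self.out_edges[node_id]
                if label is None or lbl == label]

g = TinyGraph()
g.add_node("alice", kind="person")
g.add_node("bob", kind="person")
g.add_node("gdb", kind="topic")
g.add_edge("alice", "knows", "bob", since=2019)
g.add_edge("alice", "likes", "gdb")
```

A relational engine would answer `neighbours` with a join over an edge table; here the relationship is retrieved in place, which is the point the chunk makes.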
A mole can avoid being detected by hawks, owls and other predators by
[ "moving slowly", "setting traps", "traveling beneath soil", "building decoys" ]
Key fact: living underground can be used for hiding from predators
C
2
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
" roadrunner ". github. : a dynamic analysis framework designed to facilitate rapid prototyping and experimentation with dynamic analyses for concurrent java programs.
seddb is an online database for sediment geochemistry. seddb is based on a relational database that contains the full range of analytical values for sediment samples, primarily from marine sediment cores, including major and trace element concentrations, radiogenic and stable isotope ratios, and data for all types of material such as organic and inorganic components, leachates, and size fractions. seddb also archives a vast array of metadata relating to the individual sample. examples of seddb metadata are : sample latitude and longitude ; elevation below sea surface ; material analyzed ; analytical methodology ; analytical precision and reference standard measurements. as of april, 2013 seddb contains nearly 750,000 individual analytical data points of 104,000 samples. seddb contents have been migrated to the earthchem portal. purpose seddb was developed to complement current geological data systems ( petdb, earthchem, navdat and georoc ) with an integrated and easily accessible compilation of geochemical data of marine and continental sediments to be utilized for sedimentological, geochemical, petrological, oceanographic, and paleoclimate research, as well as for educational purposes. funding and management seddb was developed, operated and maintained by a joint team of disciplinary scientists, data scientists, data managers and information technology developers at the lamont - doherty earth observatory as part of the integrated earth data applications ( ieda ) research group funded by the us national science foundation. seddb was built collaborative
Limestone is formed by water evaporating from a solution of water and mineral and a hard sedimentary rock used as building material and for making
[ "lipstick", "cement", "lemon-lime soda", "mineral water" ]
Key fact: limestone is formed by water evaporating from a solution of water and minerals
B
1
openbookqa
the chemical database service is an epsrc - funded mid - range facility that provides uk academic institutions with access to a number of chemical databases. it has been hosted by the royal society of chemistry since 2013, before which it was hosted by daresbury laboratory ( part of the science and technology facilities council ). currently, the included databases are : acd / i - lab, a tool for prediction of physicochemical properties and nmr spectra from a chemical structure available chemicals directory, a structure - searchable database of commercially available chemicals cambridge structural database ( csd ), a crystallographic database of organic and organometallic structures inorganic crystal structure database ( icsd ), a crystallographic database of inorganic structures crystalworks, a database combining data from csd, icsd and crystmet detherm, a database of thermophysical data for chemical compounds and mixtures spresiweb, a database of organic compounds and reactions = = references = =
a chemical database is a database specifically designed to store chemical information. this information is about chemical and crystal structures, spectra, reactions and syntheses, and thermophysical data. types of chemical databases bioactivity database bioactivity databases correlate structures or other chemical information to bioactivity results taken from bioassays in literature, patents, and screening programs. chemical structures chemical structures are traditionally represented using lines indicating chemical bonds between atoms and drawn on paper ( 2d structural formulae ). while these are ideal visual representations for the chemist, they are unsuitable for computational use and especially for search and storage. small molecules ( also called ligands in drug design applications ), are usually represented using lists of atoms and their connections. large molecules such as proteins are however more compactly represented using the sequences of their amino acid building blocks. radioactive isotopes are also represented, which is an important attribute for some applications. large chemical databases for structures are expected to handle the storage and searching of information on millions of molecules taking terabytes of physical memory. literature database chemical literature databases correlate structures or other chemical information to relevant references such as academic papers or patents. this type of database includes stn, scifinder, and reaxys. links to literature are also included in many databases that focus on chemical characterization. crystallographic database crystallographic databases store x - ray crystal structure data. common examples include protein data bank and cambridge structural database. nmr spectra database nmr
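The "lists of atoms and their connections" representation that the chunk above describes for small molecules can be sketched directly. The molecule (ethanol), the index scheme, and the helper name are chosen here purely for illustration; real chemical databases use richer formats such as SDF or SMILES.

```python
from collections import Counter

# Sketch of a small molecule as an atom list plus a bond (connection) list,
# the computational representation the text contrasts with drawn 2-D
# structural formulae. Atoms are referenced by list index.
atoms = ["C", "C", "O", "H", "H", "H", "H", "H", "H"]            # ethanol
bonds = [(0, 1), (1, 2), (0, 3), (0, 4), (0, 5), (1, 6), (1, 7), (2, 8)]

def formula(atoms):
    """Derive the molecular formula, C then H then other elements."""
    counts = Counter(atoms)
    order = ["C", "H"] + sorted(e for e in counts if e not in ("C", "H"))
    return "".join(e + (str(counts[e]) if counts[e] > 1 else "")
                   for e in order)
```

Because atoms and bonds are plain indexed lists, substructure search and storage reduce to graph and set operations, which is what makes this form suitable for databases where drawings are not.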
the bitterdb is a database of compounds that were reported to taste bitter to humans. the aim of the bitterdb database is to gather information about bitter - tasting natural and synthetic compounds, and their cognate bitter taste receptors ( t2rs or tas2rs ). summary the bitterdb includes over 670 compounds that were reported to taste bitter to humans. the compounds can be searched by name, chemical structure, similarity to other bitter compounds, association with a particular human bitter taste receptor, and by other properties as well. the database also contains information on mutations in bitter taste receptors that were shown to influence receptor activation by bitter compounds. database overview bitter compounds bitterdb currently contains more than 670 compounds that were cited in the literature as bitter. for each compound, the database offers information regarding its molecular properties, references for the compound's bitterness, including additional information about the bitterness category of the compound ( e. g. a bitter - sweet or slightly bitter annotation ), different compound identifiers ( smiles, cas registry number, iupac systematic name ), an indication whether the compound is derived from a natural source or is synthetic, a link to the compound's pubchem entry and different file formats for downloading ( sdf, image, smiles ). over 200 bitter compounds have been experimentally linked to their corresponding human bitter taste receptors. for those compounds, bitterdb provides additional information, including links to the publications indicating these ligand - receptor interactions, the effective concentration for receptor
Which of the following would be least likely to reproduce?
[ "2 protozoa", "2 oak trees", "2 female cats", "2 bacteria" ]
Key fact: two females can not usually reproduce with each other
C
2
openbookqa
genedb was a genome database for eukaryotic and prokaryotic pathogens. references external links http://www.genedb.org
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb includes also a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without missing the focus on the searched protein or gene. in phylomedb v4 all the information available for each tree
A puppy has a kink in his tail, but he lacked it yesterday. The puppy ______ a broken tail.
[ "inherited", "absorbed", "wanted", "acquired" ]
Key fact: the condition of the parts of an organism are acquired characteristics
D
3
openbookqa
structured query language ( sql ) ( pronounced s - q - l ; or alternatively as " sequel " ) is a domain - specific language used to manage data, especially in a relational database management system ( rdbms ). it is particularly useful in handling structured data, i. e., data incorporating relations among entities and variables. introduced in the 1970s, sql offered two main advantages over older read / write apis such as isam or vsam. firstly, it introduced the concept of accessing many records with one single command. secondly, it eliminates the need to specify how to reach a record, i. e., with or without an index. originally based upon relational algebra and tuple relational calculus, sql consists of many types of statements, which may be informally classed as sublanguages, commonly : data query language ( dql ), data definition language ( ddl ), data control language ( dcl ), and data manipulation language ( dml ). the scope of sql includes data query, data manipulation ( insert, update, and delete ), data definition ( schema creation and modification ), and data access control. although sql is essentially a declarative language ( 4gl ), it also includes procedural elements. sql was one of the first commercial languages to use edgar f. codd's relational model. the model was described in his influential 1970 paper, " a relational model of data for large shared data banks ". despite not entirely ad
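The sublanguages named in the chunk above can be shown side by side with Python's built-in sqlite3 module. The table name and contents are invented for illustration; only the DDL/DML/DQL classification comes from the text.

```python
import sqlite3

# DDL creates schema, DML changes data, DQL reads it back; SQLite accepts
# all three through the same execute() call.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE banks (id INTEGER PRIMARY KEY, name TEXT)")   # DDL
conn.execute("INSERT INTO banks (name) VALUES (?)", ("shared data",))    # DML
conn.execute("UPDATE banks SET name = 'shared data bank' WHERE id = 1")  # DML
rows = conn.execute("SELECT name FROM banks").fetchall()                 # DQL
```

Note how the SELECT names what is wanted, not how to reach the record: that declarative quality is the advantage over ISAM-style record-at-a-time APIs that the text describes.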
a database dump contains a record of the table structure and / or the data from a database and is usually in the form of a list of sql statements ( " sql dump " ). a database dump is most often used for backing up a database so that its contents can be restored in the event of data loss. corrupted databases can often be recovered by analysis of the dump. database dumps are often published by free content projects, to facilitate reuse, forking, offline use, and long - term digital preservation. dumps can be transported into environments with internet blackouts or otherwise restricted internet access, as well as facilitate local searching of the database using sophisticated tools such as grep. see also import and export of data core dump databases database management system sqlyog - mysql gui tool to generate database dump data portability external links mysqldump a database backup program postgresql dump backup methods, for postgresql databases.
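The "list of SQL statements" form of a dump described in the chunk above can be demonstrated with sqlite3's standard-library `iterdump()` (mysqldump, which the text links, plays the same role for MySQL). The table and value are invented for illustration.

```python
import sqlite3

# Produce a SQL dump (CREATE + INSERT statements) and restore from it,
# which is exactly the backup/restore cycle the text describes.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t (x INTEGER)")
src.execute("INSERT INTO t VALUES (42)")
dump = "\n".join(src.iterdump())  # the dump: plain SQL text

# Restoring recreates both the table structure and the data.
dst = sqlite3.connect(":memory:")
dst.executescript(dump)
restored = dst.execute("SELECT x FROM t").fetchall()
```

Because the dump is plain text, it can be versioned, published, or searched with tools like grep, which is why free-content projects distribute dumps this way.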
the problem of database repair is a question about relational databases which has been studied in database theory, and which is a particular kind of data cleansing. the problem asks about how we can " repair " an input relational database in order to make it satisfy integrity constraints. the goal of the problem is to be able to work with data that is " dirty ", i. e., does not satisfy the right integrity constraints, by reasoning about all possible repairs of the data, i. e., all possible ways to change the data to make it satisfy the integrity constraints, without committing to a specific choice. several variations of the problem exist, depending on : what we intend to figure out about the dirty data : figuring out if some database tuple is certain ( i. e., is in every repaired database ), figuring out if some query answer is certain ( i. e., the answer is returned when evaluating the query on every repaired database ) which kinds of ways are allowed to repair the database : can we insert new facts, remove facts ( so - called subset repairs ), and so on which repaired databases do we study : those where we only change a minimal subset of the database tuples ( e. g., minimal subset repairs ), those where we only change a minimal number of database tuples ( e. g., minimal cardinality repairs ) the problem of database repair has been studied to understand what is the complexity of these different problem variants, i. e.,
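The subset-repair and certain-tuple notions in the chunk above can be made concrete on a toy relation. The constraint (person determines city), the facts, and all names are invented; the enumeration below is brute force and only meant to illustrate the definitions, not an efficient algorithm.

```python
from itertools import combinations

# Dirty data: "alice" violates the key constraint person -> city.
facts = {("alice", "paris"), ("alice", "lyon"), ("bob", "rome")}

def satisfies_key(db):
    people = [p for (p, _) in db]
    return len(people) == len(set(people))

def subset_repairs(db):
    """Minimal-subset repairs: consistent subsets with no consistent
    strict superset (so as little as possible is removed)."""
    subsets = [set(s) for r in range(len(db) + 1)
               for s in combinations(db, r)]
    consistent = [s for s in subsets if satisfies_key(s)]
    return [s for s in consistent
            if not any(s < t for t in consistent)]

repairs = subset_repairs(facts)
# A tuple is *certain* if it appears in every repair.
certain = set.intersection(*repairs)
```

Here there are two repairs (keep paris or keep lyon), so neither alice fact is certain, while bob's tuple survives every repair: that is the "reason about all repairs without committing to one" idea from the text.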
what do planets orbit?
[ "volcanos", "starlight", "astral beings", "people" ]
Key fact: planets orbit stars
C
2
openbookqa
metpetdb is a relational database and repository for global geochemical data on and images collected from metamorphic rocks from the earth's crust. metpetdb is designed and built by a global community of metamorphic petrologists in collaboration with computer scientists at rensselaer polytechnic institute as part of the national cyberinfrastructure initiative and supported by the national science foundation. metpetdb is unique in that it incorporates image data collected by a variety of techniques, e. g. photomicrographs, backscattered electron images ( sem ), and x - ray maps collected by wavelength dispersive spectroscopy or energy dispersive spectroscopy. purpose metpetdb was built for the purpose of archiving published data and for storing new data for ready access to researchers and students in the petrologic community. this database facilitates the gathering of information for researchers beginning new projects and permits browsing and searching for data relating to anywhere on the globe. metpetdb provides a platform for collaborative studies among researchers anywhere on the planet, serves as a portal for students beginning their studies of metamorphic geology, and acts as a repository of vast quantities of data being collected by researchers globally. design the basic structure of metpetdb is based on a geologic sample and derivative subsamples. geochemical data are linked to subsamples and the minerals within them, while image data can relate to samples or subsamples. metpetdb is designed to store the distinct spatial / textural context
describe hotspots and the volcanic activity they create.
in statistics, a volcano plot is a type of scatter - plot that is used to quickly identify changes in large data sets composed of replicate data. it plots significance versus fold - change on the y and x axes, respectively. these plots are increasingly common in omic experiments such as genomics, proteomics, and metabolomics where one often has a list of many thousands of replicate data points between two conditions and one wishes to quickly identify the most meaningful changes. a volcano plot combines a measure of statistical significance from a statistical test ( e. g., a p value from an anova model ) with the magnitude of the change, enabling quick visual identification of those data - points ( genes, etc. ) that display large magnitude changes that are also statistically significant. a volcano plot is a sophisticated data visualization tool used in statistical and genomic analyses to illustrate the relationship between the magnitude of change and statistical significance. it is constructed by plotting the negative logarithm ( base 10 ) of the p - value on the y - axis, ensuring that data points with lower p - values ( indicative of higher statistical significance ) are positioned toward the top of the plot. the x - axis represents the logarithm of the fold change between two conditions, allowing for a symmetric representation of both upregulated and downregulated changes relative to the center. this transformation ensures that equivalent deviations in either direction are equidistant from the origin, facilitating intuitive interpretation
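The coordinate transform the chunk above describes (log fold change on x, negative log10 p-value on y) is a two-line computation. The gene names and numbers below are invented test data; plotting itself is omitted so the sketch stays library-free.

```python
import math

# Each result is (fold_change, p_value); the transform puts significant,
# large-magnitude changes far from x = 0 and high on the y axis.
results = {"geneA": (4.0, 1e-6), "geneB": (0.5, 1e-4), "geneC": (1.0, 0.7)}

def volcano_xy(fold_change, p_value):
    """x = log2 fold change (symmetric up/down), y = -log10 p-value."""
    return (math.log2(fold_change), -math.log10(p_value))

points = {g: volcano_xy(fc, p) for g, (fc, p) in results.items()}
```

A 4-fold increase and a 2-fold decrease land at x = +2 and x = -1 respectively, illustrating the symmetric treatment of up- and downregulation that the text notes.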
A bunch of lava that is sitting somewhere could create on its own
[ "an icy, frozen villa", "a flat raised area", "a new oak tree", "a happy landscape portrait" ]
Key fact: a plateau is formed by a buildup of cooled lava
B
1
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, object - relational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
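The "same model of representation in database and language" idea from the chunk above can be sketched with pickle and a dict standing in for the object store. Everything here (the `Part` class, the string object ids, the helper names) is invented for illustration; a real OODBMS adds object identity, transactions, and queries.

```python
import pickle

# Toy object store: language objects go in and come back out as objects,
# with no mapping to tables in between.

class Part:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []   # objects reference objects directly

store = {}

def persist(oid, obj):
    store[oid] = pickle.dumps(obj)       # whole object graph serialized

def load(oid):
    return pickle.loads(store[oid])

wheel = Part("wheel")
car = Part("car", [wheel, Part("engine")])
persist("car-1", car)
same_car = load("car-1")
```

The nested `Part` objects survive the round trip intact, which is the consistency-within-one-environment point the text makes; an RDBMS would instead decompose this object graph into rows and reassemble it with joins.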
an array database management system or array dbms provides database services specifically for arrays ( also called raster data ), that is : homogeneous collections of data items ( often called pixels, voxels, etc. ), sitting on a regular grid of one, two, or more dimensions. often arrays are used to represent sensor, simulation, image, or statistics data. such arrays tend to be big data, with single objects frequently ranging into terabyte and soon petabyte sizes ; for example, today's earth and space observation archives typically grow by terabytes a day. array databases aim at offering flexible, scalable storage and retrieval on this information category. overview in the same style as standard database systems do on sets, array dbmss offer scalable, flexible storage and flexible retrieval / manipulation on arrays of ( conceptually ) unlimited size. as in practice arrays never appear standalone, such an array model normally is embedded into some overall data model, such as the relational model. some systems implement arrays as an analogy to tables, some introduce arrays as an additional attribute type. management of arrays requires novel techniques, particularly due to the fact that traditional database tuples and objects tend to fit well into a single database page ( a unit of disk access on server, typically 4 kb ) while array objects easily can span several media. the prime task of the array storage manager is to give fast access to large arrays and sub - arrays. to this end, arrays get partitioned, during insertion, into
A person wanting to find a live bear in a forest will have difficulty because bears
[ "avoid humanity", "are domesticated", "are friendly", "are camouflaged" ]
Key fact: animals usually distance themselves from humans
A
0
openbookqa
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
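The inference problem the chunk above raises (aggregate queries combined to reveal an individual) can be shown in miniature. The salary data, names, and the minimum-query-set-size guard below are all invented for illustration; the guard is one of the "common approaches" family the text alludes to, and is known to be insufficient on its own.

```python
# Two overlapping SUM queries, each a legitimate aggregate, differ by
# exactly one person's value, so their difference isolates that record.
salaries = {"ann": 50, "bo": 60, "cy": 70, "di": 80}

def query_sum(names):
    return sum(salaries[n] for n in names)

leak = query_sum(["ann", "bo", "cy", "di"]) - query_sum(["ann", "bo", "cy"])

def guarded_sum(names, min_set_size=2):
    """Sketch of a query-set-size restriction: refuse tiny query sets.
    Alone this does not stop the subtraction attack shown above."""
    if len(names) < min_set_size:
        raise ValueError("query set too small")
    return query_sum(names)
```

This is exactly why securing statistical databases is hard: each query looks harmless, and the disclosure only emerges from their combination.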
genedb was a genome database for eukaryotic and prokaryotic pathogens. references external links http://www.genedb.org
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
A full moon is visible
[ "bimonthly", "biweekly", "every two months", "every four weeks" ]
Key fact: each of the moon 's phases usually occurs once per month
D
3
openbookqa
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by
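The relational minimum described above, data presented as tables and manipulated with relational operators, can be shown with Python's built-in sqlite3 module. The table and column names are invented for the example; the join is the relational operator on display.

```python
# Two relations (tables) and a join, via the stdlib sqlite3 module.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE paper (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT)")
con.execute("INSERT INTO author VALUES (1, 'codd')")
con.execute("INSERT INTO paper VALUES (10, 1, 'a relational model of data')")

# a join combines rows from two relations on a shared key
rows = con.execute(
    "SELECT author.name, paper.title FROM author "
    "JOIN paper ON paper.author_id = author.id"
).fetchall()
print(rows)   # [('codd', 'a relational model of data')]
```

Note that SQL, as the chunk hints, is an option layered on top of the model rather than the model itself; Codd's definition is in terms of relations and operators, not any particular query syntax.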
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
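A sketch of the OODB idea from the chunk above: the program's objects go into the store as objects, so application code and database share one representation instead of mapping to tables. Here a dict of pickled objects stands in for the object store; real OODBMSs add object identity, transactions, and queries, and the class and store names are invented.

```python
# Objects referencing objects, persisted as-is -- no table mapping layer.
import pickle

class Part:
    def __init__(self, name, subparts=None):
        self.name = name
        self.subparts = subparts or []   # objects reference objects directly

store = {}

def persist(oid, obj):
    store[oid] = pickle.dumps(obj)       # the object graph is stored whole

def load(oid):
    return pickle.loads(store[oid])

wheel = Part("wheel")
car = Part("car", [wheel, Part("engine")])
persist("car-1", car)

restored = load("car-1")
print(restored.name)                        # 'car'
print([p.name for p in restored.subparts])  # ['wheel', 'engine']
```

The nested `subparts` list is why the chunk mentions CAD: assemblies of parts are naturally object graphs, and flattening them into tables is exactly the friction an OODBMS avoids.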
a temporal database stores data relating to time instances. it offers temporal data types and stores information relating to past, present and future time. temporal databases can be uni - temporal, bi - temporal or tri - temporal. more specifically the temporal aspects usually include valid time, transaction time and / or decision time. valid time is the time period during or event time at which a fact is true in the real world. transaction time is the time at which a fact was recorded in the database. decision time is the time at which the decision was made about the fact. used to keep a history of decisions about valid times. types uni - temporal a uni - temporal database has one axis of time, either the validity range or the system time range. bi - temporal a bi - temporal database has two axes of time : valid time transaction time or decision time tri - temporal a tri - temporal database has three axes of time : valid time transaction time decision time this approach introduces additional complexities. temporal databases are in contrast to current databases ( not to be confused with currently available databases ), which store only facts which are believed to be true at the current time. features temporal databases support managing and accessing temporal data by providing one or more of the following features : a time period datatype, including the ability to represent time periods with no end ( infinity or forever ) the ability to define valid and transaction time period attributes and bitemporal relations system - maintained transaction time temporal primary keys, including
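The bi-temporal case above can be sketched directly: each fact carries a valid-time range (when it was true in the real world) and a transaction time (when the database learned it). Field names and the tuple layout are illustrative; real temporal databases and SQL:2011 use period types and system-maintained columns instead.

```python
# Bi-temporal facts: valid time plus transaction time.
from datetime import date

facts = []  # (entity, value, valid_from, valid_to, recorded_on)

def record(entity, value, valid_from, valid_to, recorded_on):
    facts.append((entity, value, valid_from, valid_to, recorded_on))

def as_of(entity, valid_day, known_by):
    """What did the database, as of `known_by`, say was true on `valid_day`?"""
    for e, v, vf, vt, rec in reversed(facts):   # newest recording wins
        if e == entity and vf <= valid_day < vt and rec <= known_by:
            return v
    return None

# an address recorded in january, then corrected in july
record("alice", "1 main st", date(2020, 1, 1), date(2021, 1, 1), date(2020, 1, 5))
record("alice", "9 oak ave", date(2020, 6, 1), date(2021, 1, 1), date(2020, 7, 1))

print(as_of("alice", date(2020, 6, 15), date(2020, 6, 1)))  # '1 main st' -- correction not yet recorded
print(as_of("alice", date(2020, 6, 15), date(2020, 8, 1)))  # '9 oak ave'
```

The two queries differ only in transaction time, which is the distinction a current-only database cannot express: it would have silently overwritten the old address.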
To protect yourself from blisters you may try
[ "a bandaid", "fur", "a rainbow", "a chicken" ]
Key fact: as the thickness of an object increases , the resistance to damage of that object will increase
A
0
openbookqa
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by
A series of horrible earthquakes can affect deer populations by forcing
[ "relocation", "planting seeds", "breeding", "studying" ]
Key fact: natural disasters can cause animals to leave an environment
A
0
openbookqa
the plant dna c - values database ( https : / / cvalues. science. kew. org / ) is a comprehensive catalogue of c - value ( nuclear dna content, or in diploids, genome size ) data for land plants and algae. the database was created by prof. michael d. bennett and dr. ilia j. leitch of the royal botanic gardens, kew, uk. the database was originally launched as the " angiosperm dna c - values database " in april 1997, essentially as an online version of collected data lists that had been published by prof. bennett and colleagues since the 1970s. release 1. 0 of the more inclusive plant dna c - values database was launched in 2001, with subsequent releases 2. 0 in january 2003 and 3. 0 in december 2004. in addition to the angiosperm dataset made available in 1997, the database has been expanded taxonomically several times and now includes data from pteridophytes ( since 2000 ), gymnosperms ( since 2001 ), bryophytes ( since 2001 ), and algae ( since 2004 ) ( see ( 1 ) for update history ). ( note that each of these subset databases is cited individually as they may contain different sets of authors ). the most recent release of the database ( release 7. 1 ) went live in april 2019. it contains data for 12, 273 species of plants comprising 10, 770 angiosperms, 421 gymnos
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb includes also a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without missing the focus on the searched protein or gene. in phylomedb v4 all the information available for each tree
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
Which contain seeds?
[ "mandarins", "corn", "carrots", "potatoes" ]
Key fact: fruit contains seeds
A
0
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
the plant dna c - values database ( https : / / cvalues. science. kew. org / ) is a comprehensive catalogue of c - value ( nuclear dna content, or in diploids, genome size ) data for land plants and algae. the database was created by prof. michael d. bennett and dr. ilia j. leitch of the royal botanic gardens, kew, uk. the database was originally launched as the " angiosperm dna c - values database " in april 1997, essentially as an online version of collected data lists that had been published by prof. bennett and colleagues since the 1970s. release 1. 0 of the more inclusive plant dna c - values database was launched in 2001, with subsequent releases 2. 0 in january 2003 and 3. 0 in december 2004. in addition to the angiosperm dataset made available in 1997, the database has been expanded taxonomically several times and now includes data from pteridophytes ( since 2000 ), gymnosperms ( since 2001 ), bryophytes ( since 2001 ), and algae ( since 2004 ) ( see ( 1 ) for update history ). ( note that each of these subset databases is cited individually as they may contain different sets of authors ). the most recent release of the database ( release 7. 1 ) went live in april 2019. it contains data for 12, 273 species of plants comprising 10, 770 angiosperms, 421 gymnos
the plant proteome database is a national science foundation - funded project to determine the biological function of each protein in plants. it includes data for two plants that are widely studied in molecular biology, arabidopsis thaliana and maize ( zea mays ). initially the project was limited to plant plastids, under the name of the plastid pdb, but was expanded and renamed plant pdb in november 2007. see also proteome references external links plant proteome database home page
In the food chain process what has the role of producer?
[ "eaters", "flora", "carnivore", "consumers" ]
Key fact: In the food chain process a green plant has the role of producer
B
1
openbookqa
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
the bitterdb is a database of compounds that were reported to taste bitter to humans. the aim of the bitterdb database is to gather information about bitter - tasting natural and synthetic compounds, and their cognate bitter taste receptors ( t2rs or tas2rs ). summary the bitterdb includes over 670 compounds that were reported to taste bitter to humans. the compounds can be searched by name, chemical structure, similarity to other bitter compounds, association with a particular human bitter taste receptor, and by other properties as well. the database also contains information on mutations in bitter taste receptors that were shown to influence receptor activation by bitter compounds. database overview bitter compounds bitterdb currently contains more than 670 compounds that were cited in the literature as bitter. for each compound, the database offers information regarding its molecular properties, references for the compounds bitterness, including additional information about the bitterness category of the compound ( e. g. a bitter - sweet or slightly bitter annotation ), different compound identifiers ( smiles, cas registry number, iupac systematic name ), an indication whether the compound is derived from a natural source or is synthetic, a link to the compounds pubchem entry and different file formats for downloading ( sdf, image, smiles ). over 200 bitter compounds have been experimentally linked to their corresponding human bitter taste receptors. for those compounds, bitterdb provides additional information, including links to the publications indicating these ligandreceptor interactions, the effective concentration for receptor
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb includes also a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without missing the focus on the searched protein or gene. in phylomedb v4 all the information available for each tree
Do all insects have to undergo every stage of change before becoming full grown?
[ "all of these", "insects are born live", "pupa stage is sometimes skipped", "pupa is a required stage" ]
Key fact: incomplete metamorphosis is when an insect reaches the adult stage without being a pupa
C
2
openbookqa
when an insect egg hatches, a larva emerges. the larva eats and grows and then enters the pupa stage. the pupa is immobile and may be encased in a cocoon. during the pupa stage, the insect goes through metamorphosis. tissues and appendages of the larva break down and reorganize into the adult form. how did such an incredible transformation evolve? metamorphosis is actually very advantageous. it allows functions to be divided between life stages. each stage can evolve adaptations to suit it for its specific functions without affecting the adaptations of the other stage.
after hatching, most arthropods go through one or more larval stages before reaching adulthood. the larvae may look very different from the adults. they change into the adult form in a process called metamorphosis. during metamorphosis, the arthropod is called a pupa. it may or may not spend this stage inside a special container called a cocoon. a familiar example of arthropod metamorphosis is the transformation of a caterpillar ( larva ) into a butterfly ( adult ) ( see figure below ). distinctive life stages and metamorphosis are highly adaptive. they allow functions to be divided among different life stages. each life stage can evolve adaptations to suit it for its specific functions without affecting the adaptations of the other stages.
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
If an object undergoes chemical change then that object will have new chemical what?
[ "warmth", "temperature", "appearance", "attributes" ]
Key fact: if an object undergoes chemical change then that object will have new chemical properties
D
3
openbookqa
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision - support systems and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
a heat map ( or heatmap ) is a 2 - dimensional data visualization technique that represents the magnitude of individual values within a dataset as a color. the variation in color may be by hue or intensity. in some applications such as crime analytics or website click - tracking, color is used to represent the density of data points rather than a value associated with each point. " heat map " is a relatively new term, but the practice of shading matrices has existed for over a century. history heat maps originated in 2d displays of the values in a data matrix. larger values were represented by small dark gray or black squares ( pixels ) and smaller values by lighter squares. the earliest known example dates to 1873, when toussaint loua used a hand - drawn and colored shaded matrix to visualize social statistics across the districts of paris. the idea of reordering rows and columns to reveal structure in a data matrix, known as seriation, was introduced by flinders petrie in 1899. in 1950, louis guttman developed the scalogram, a method for ordering binary matrices to expose a one - dimensional scale structure. in 1957, peter sneath displayed the results of a cluster analysis by permuting the rows and the columns of a matrix to place similar values near each other according to the clustering. this idea was implemented by robert ling in 1973 with a computer program called shade. ling used overstruck printer characters to represent different shades of gray, one character -
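A tiny echo of Ling's 1973 SHADE program mentioned above: render a data matrix as characters of increasing darkness, so larger values read as darker cells. The character ramp is an arbitrary choice for the example.

```python
# Shade a matrix with a character ramp, light to dark.
RAMP = " .:#@"   # light -> dark

def heatmap(matrix):
    lo = min(v for row in matrix for v in row)
    hi = max(v for row in matrix for v in row)
    span = hi - lo or 1
    lines = []
    for row in matrix:
        # scale each value into an index on the ramp
        lines.append("".join(RAMP[int((v - lo) / span * (len(RAMP) - 1))]
                             for v in row))
    return "\n".join(lines)

data = [[0, 1, 2],
        [3, 4, 5],
        [6, 7, 8]]
print(heatmap(data))
```

Mapping magnitude to character density is the same idea as mapping it to color hue or intensity; only the output medium (a line printer versus a screen) differs.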
then the bulk of effort concentrates on writing the proper mediator code that will transform predicates on weather into a query over the weather website. this effort can become complex if some other source also relates to weather, because the designer may need to write code to properly combine the results from the two sources. on the other hand, in lav, the source database is modeled as a set of views over g.
Which converts sunlight, water, and carbon dioxide to grow?
[ "a thing that flowers", "a thing that goes around the Earth", "a thing that flies in the sky", "a thing that lives in caves" ]
Key fact: a plant requires photosynthesis to grow
A
0
openbookqa
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
in natural language and physical science, a physical object or material object ( or simply an object or body ) is a contiguous collection of matter, within a defined boundary ( or surface ), that exists in space and time. usually contrasted with abstract objects and mental objects. also in common usage, an object is not constrained to consist of the same collection of matter. atoms or parts of an object may change over time. an object is usually meant to be defined by the simplest representation of the boundary consistent with the observations. however the laws of physics only apply directly to objects that consist of the same collection of matter. in physics, an object is an identifiable collection of matter, which may be constrained by an identifiable boundary, and may move as a unit by translation or rotation, in 3 - dimensional space. each object has a unique identity, independent of any other properties. two objects may be identical, in all properties except position, but still remain distinguishable. in most cases the boundaries of two objects may not overlap at any point in time. the property of identity allows objects to be counted. examples of models of physical bodies include, but are not limited to a particle, several interacting smaller bodies ( particulate or otherwise ). discrete objects are in contrast to continuous media. the common conception of physical objects includes that they have extension in the physical world, although there do exist theories of quantum physics and cosmology which arguably challenge this. in modern physics, " extension " is understood in terms of the
an array database management system or array dbms provides database services specifically for arrays ( also called raster data ), that is : homogeneous collections of data items ( often called pixels, voxels, etc. ), sitting on a regular grid of one, two, or more dimensions. often arrays are used to represent sensor, simulation, image, or statistics data. such arrays tend to be big data, with single objects frequently ranging into terabyte and soon petabyte sizes ; for example, today's earth and space observation archives typically grow by terabytes a day. array databases aim at offering flexible, scalable storage and retrieval on this information category. overview in the same style as standard database systems do on sets, array dbmss offer scalable, flexible storage and flexible retrieval / manipulation on arrays of ( conceptually ) unlimited size. as in practice arrays never appear standalone, such an array model normally is embedded into some overall data model, such as the relational model. some systems implement arrays as an analogy to tables, some introduce arrays as an additional attribute type. management of arrays requires novel techniques, particularly due to the fact that traditional database tuples and objects tend to fit well into a single database page ( a unit of disk access on a server, typically 4 kb ), while array objects easily can span several media. the prime task of the array storage manager is to give fast access to large arrays and sub - arrays. to this end, arrays get partitioned, during insertion, into
If a student asked a teacher about the size of a certain item, what could be an indicator?
[ "the volume of the item", "the color of the item", "the luster of the item", "the depth of the item" ]
Key fact: the volume of an object can be used to describe the size of that object
A
0
openbookqa
a data item describes an atomic state of a particular object concerning a specific property at a certain time point. a collection of data items for the same object at the same time forms an object instance ( or table row ). any type of complex information can be broken down to elementary data items ( atomic state ). data items are identified by object ( o ), property ( p ) and time ( t ), while the value ( v ) is a function of o, p and t : v = f ( o, p, t ). values typically are represented by symbols like numbers, texts, images, sounds or videos. values are not necessarily atomic. a value's complexity depends on the complexity of the property and time component. when looking at databases or xml files, the object is usually identified by an object name or other type of object identifier, which is part of the " data ". properties are defined as columns ( table row ), properties ( object instance ) or tags ( xml ). often, time is not explicitly expressed and is an attribute applying to the complete data set. other data collections provide time on the instance level ( time series ), column level, or even attribute / property level.
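The value function described above, v = f ( o, p, t ), can be sketched as a small store keyed by (object, property, time) triples. A minimal illustration only; the class and method names are invented for this sketch.

```python
# Minimal sketch of the data-item model: a value is a function of
# object (o), property (p), and time (t). Names are illustrative.
class DataItemStore:
    def __init__(self):
        self._items = {}  # (o, p, t) -> value

    def record(self, obj, prop, time, value):
        """Store one atomic data item for an object's property at a time point."""
        self._items[(obj, prop, time)] = value

    def value(self, obj, prop, time):
        """v = f(o, p, t): look up the atomic state, or None if unrecorded."""
        return self._items.get((obj, prop, time))

    def instance(self, obj, time):
        """An object instance (table row): all properties of obj at one time."""
        return {p: v for (o, p, t), v in self._items.items()
                if o == obj and t == time}

store = DataItemStore()
store.record("sensor-1", "temperature", "2020-01-01T00:00", 21.5)
store.record("sensor-1", "humidity", "2020-01-01T00:00", 40)
```

Collecting every property of one object at one time point yields the object instance, i.e. the table row mentioned above.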
a crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. they are characterized by symmetry, morphology, and directionally dependent physical properties. a crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. ( molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in x - ray, neutron, and electron diffraction based crystallography ). crystal structures of crystalline material are typically determined from x - ray or neutron single - crystal diffraction data and stored in crystal structure databases. they are routinely identified by comparing reflection intensities and lattice spacings from x - ray powder diffraction data with entries in powder - diffraction fingerprinting databases. crystal structures of nanometer sized crystalline samples can be determined via structure factor amplitude information from single - crystal electron diffraction data or structure factor amplitude and phase angle information from fourier transforms of hrtem images of crystallites. they are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice - fringe fingerprint plots with entries in a lattice - fringe fingerprinting database. crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. many provide structure visualization capabilities. they can be browser based or
an array database management system or array dbms provides database services specifically for arrays ( also called raster data ), that is : homogeneous collections of data items ( often called pixels, voxels, etc. ), sitting on a regular grid of one, two, or more dimensions. often arrays are used to represent sensor, simulation, image, or statistics data. such arrays tend to be big data, with single objects frequently ranging into terabyte and soon petabyte sizes ; for example, today's earth and space observation archives typically grow by terabytes a day. array databases aim at offering flexible, scalable storage and retrieval on this information category. overview in the same style as standard database systems do on sets, array dbmss offer scalable, flexible storage and flexible retrieval / manipulation on arrays of ( conceptually ) unlimited size. as in practice arrays never appear standalone, such an array model normally is embedded into some overall data model, such as the relational model. some systems implement arrays as an analogy to tables, some introduce arrays as an additional attribute type. management of arrays requires novel techniques, particularly due to the fact that traditional database tuples and objects tend to fit well into a single database page ( a unit of disk access on a server, typically 4 kb ), while array objects easily can span several media. the prime task of the array storage manager is to give fast access to large arrays and sub - arrays. to this end, arrays get partitioned, during insertion, into
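The partitioning mentioned above, splitting an array into fixed-size tiles on insertion so that a sub-array read touches only the few tiles it overlaps, can be sketched as follows. The tile size and layout here are invented for illustration; real array DBMSs use far more elaborate schemes.

```python
# Sketch of array partitioning ("tiling"): a large 2-D array is split into
# fixed-size tiles on insertion so that sub-array reads touch few tiles.
TILE = 2  # tile edge length (real systems size tiles to the disk page)

def partition(grid):
    """Split a 2-D list into a dict mapping (tile_row, tile_col) -> tile."""
    tiles = {}
    for r in range(0, len(grid), TILE):
        for c in range(0, len(grid[0]), TILE):
            tiles[(r // TILE, c // TILE)] = [row[c:c + TILE]
                                             for row in grid[r:r + TILE]]
    return tiles

def read_cell(tiles, r, c):
    """Fetch one cell by loading only the single tile that contains it."""
    return tiles[(r // TILE, c // TILE)][r % TILE][c % TILE]

grid = [[r * 4 + c for c in range(4)] for r in range(4)]
tiles = partition(grid)
```

A 4x4 grid becomes four 2x2 tiles; reading cell (3, 2) loads only tile (1, 1) rather than the whole array.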
Digestion is when stomach acid breaks down what?
[ "food essays", "sustenance", "water", "air" ]
Key fact: digestion is when stomach acid breaks down food
B
1
openbookqa
dr. duke's phytochemical and ethnobotanical databases is an online database developed by james a. duke at the usda. the databases report species, phytochemicals, and biological activity, as well as ethnobotanical uses. the current phytochemical and ethnobotanical databases facilitate plant, chemical, bioactivity, and ethnobotany searches. a large number of plants and their chemical profiles are covered, and data are structured to support browsing and searching in several user - focused ways. for example, users can get a list of chemicals and activities for a specific plant of interest, using either its scientific or common name download a list of chemicals and their known activities in pdf or spreadsheet form find plants with chemicals known for a specific biological activity display a list of chemicals with their ld toxicity data find plants with potential cancer - preventing activity display a list of plants for a given ethnobotanical use find out which plants have the highest levels of a specific chemical references to the supporting scientific publications are provided for each specific result. also included are links to nutritional databases, plants and cancer treatments and other plant - related databases. the content of the database is licensed under the creative commons cc0 public domain. external links dr. duke's phytochemical and ethnobotanical databases references ( dataset ) u. s. department of agriculture, agricultural research service. 1992 - 2016
food is chemical energy stored in organic molecules.
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as nosql databases. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
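The model described above, relationships stored directly as labelled, directed edges with properties so that traversal is a lookup rather than a join, can be sketched in a few lines. A hypothetical illustration; the class and method names are invented, not the API of any real graph database.

```python
# Sketch of the graph model: relationships are first-class, labelled,
# directed edges carrying properties, stored with their source node so
# that one-hop traversal is a direct lookup. Illustrative only.
class Graph:
    def __init__(self):
        self.nodes = {}      # node id -> properties
        self.out_edges = {}  # node id -> list of (label, target, properties)

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props
        self.out_edges.setdefault(node_id, [])

    def add_edge(self, src, label, dst, **props):
        self.out_edges[src].append((label, dst, props))

    def neighbours(self, node_id, label=None):
        """One-hop traversal: follow stored edges, optionally by label."""
        return [dst for (lbl, dst, _) in self.out_edges[node_id]
                if label is None or lbl == label]

g = Graph()
g.add_node("alice", kind="person")
g.add_node("bob", kind="person")
g.add_edge("alice", "knows", "bob", since=2019)
```

Because the edge list is stored with the node, answering "whom does alice know" requires no join over a separate relationship table.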
Reducing bacteria in food prevents what?
[ "electricity", "maladies", "observation", "signals" ]
Key fact: reducing bacteria in food prevents illness in people
B
1
openbookqa
verde ( visualizing energy resources dynamically on the earth ) is a visualization and analysis capability of the united states department of energy ( doe ). the system, developed and maintained by oak ridge national laboratory ( ornl ), provides wide - area situational understanding of the u. s. electric grid. enabling grid monitoring, weather impacts prediction and analysis, verde supports preparedness and response to potentially large outage events. as a real - time geo - visualization capability, it characterizes the dynamic behavior of the grid over interconnects giving views into bulk transmission lines as well as county - level power distribution status. by correlating grid behaviors with cyber events, the platform also enables a method to link cyber - to - infrastructure dependencies. verde integrates different data elements from other available on - line services, databases, and social media. the tennessee valley authority ( tva ) and other major utilities spanning multiple regions across the electric grid interconnection provide real - time status of their systems. social media sources such as twitter provide additional data - sources for visualization and analyses. the verde software, which was developed by the computational sciences and engineering division ( csed ) of ornl, is used outside of the doe for a number of related national security requirements. references shankar, m., stovall, j., sorokine, a., bhaduri, b., king, t. ( 20 - 24 july 2008 ) power and energy society
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as nosql databases. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
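One of the common approaches alluded to at the end of the chunk above is query-set-size restriction: refuse any aggregate query whose query set is too small, since a tiny aggregate can expose an individual's record. A minimal sketch; the threshold, data, and function names are invented for illustration, and real systems combine this with further defences because it is defeatable by combining overlapping queries.

```python
# Sketch of query-set-size restriction for a statistical database:
# aggregate queries are answered only when enough records are covered.
# Threshold and data are invented for illustration.
MIN_QUERY_SET = 3

salaries = {"ann": 50, "bo": 60, "cy": 55, "di": 70}

def avg_salary(names):
    """Aggregate-only access: answer only if the query set is large enough."""
    selected = [salaries[n] for n in names if n in salaries]
    if len(selected) < MIN_QUERY_SET:
        return None  # refused: query set too small to protect individuals
    return sum(selected) / len(selected)
```

A query naming a single person is refused outright, while a query over three or more records gets an aggregate answer.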
The elements will over time level
[ "mountains", "the seas", "god", "giants" ]
Key fact: soil is formed by weathering
A
0
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as nosql databases. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by
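The minimum the chunk above requires of a relational system, data presented as tables of rows and columns plus relational operators to manipulate them, can be shown with Python's built-in sqlite3 module. The table and rows are invented for illustration.

```python
# The relational model in miniature: data as a table of rows and columns,
# manipulated with SQL. Uses the standard-library sqlite3 module; the
# employee table and its rows are invented for this sketch.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (name TEXT, dept TEXT, salary INTEGER)")
conn.executemany(
    "INSERT INTO employee VALUES (?, ?, ?)",
    [("ada", "eng", 120), ("grace", "eng", 130), ("joan", "ops", 90)],
)

# Relational operators (selection and projection) expressed in SQL:
rows = conn.execute(
    "SELECT name FROM employee WHERE dept = ? ORDER BY name", ("eng",)
).fetchall()
```

The query selects the rows with dept equal to "eng" and projects out the name column, returning a result that is itself tabular.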
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, object - relational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop products, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
Ground has
[ "electricity", "divorce", "elements", "common understanding" ]
Key fact: Earth is made of rock
C
2
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as nosql databases. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a key - value store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
split up is an intelligent decision support system, which makes predictions about the distribution of marital property following divorce in australia. it is designed to assist judges, registrars of the family court of australia, mediators and lawyers. split up operates as a hybrid system, combining rule based reasoning with neural network theory. rule based reasoning operates within strict parameters, in the form : if < condition ( s ) > then < action >. : 196, 202 neural networks, by contrast, are considered to be better suited to generate decisions in uncertain domains, since they can be taught to weigh the factors considered by judicial decision makers from case data. yet, they do not provide an explanation for the conclusions they reach. split _ up, with a view to overcome this flaw, uses argument structures proposed by toulmin as the basis for representations from which explanations can be generated. : 186 application in australian family law, a judge in determining the distribution of property will : identify the assets of the marriage included in the common pool establish what percentage of the common pool each party will receive determine a final property order in line with the decisions made in 1. and 2. split _ up implements step 1 and 2 : the common pool determination and the prediction of a percentage split. the common pool determination since the determination of marital property is rule based, it is implemented using directed graphs. : 269 however, the percentage split between the parties is discretionary in that a judge has a wide discretion to look at each party's contributions
database theory encapsulates a broad range of topics related to the study and research of the theoretical realm of databases and database management systems. theoretical aspects of data management include, among other areas, the foundations of query languages, computational complexity and expressive power of queries, finite model theory, database design theory, dependency theory, foundations of concurrency control and database recovery, deductive databases, temporal and spatial databases, real - time databases, managing uncertain data and probabilistic databases, and web data. most research work has traditionally been based on the relational model, since this model is usually considered the simplest and most foundational model of interest. corresponding results for other data models, such as object - oriented or semi - structured models, or, more recently, graph data models and xml, are often derivable from those for the relational model. database theory helps one to understand the complexity and power of query languages and their connection to logic. starting from relational algebra and first - order logic ( which are equivalent by codd's theorem ) and the insight that important queries such as graph reachability are not expressible in this language, more powerful language based on logic programming and fixpoint logic such as datalog were studied. the theory also explores foundations of query optimization and data integration. here most work studied conjunctive queries, which admit query optimization even under constraints using the chase algorithm. the main research conferences in the area are the acm symposium on principles of database systems ( pods
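The chunk above names graph reachability as the canonical query that plain relational algebra / first-order logic cannot express, motivating fixpoint languages such as Datalog. A minimal sketch of that fixpoint computation, with an invented edge relation:

```python
# Graph reachability computed as a Datalog-style fixpoint:
#   reach(x, y) <- edge(x, y)
#   reach(x, z) <- reach(x, y), edge(y, z)
# The edge relation below is invented for illustration.
edges = {("a", "b"), ("b", "c"), ("c", "d")}

def reachability(edges):
    reach = set(edges)  # base case: every edge is a reachable pair
    while True:
        # one application of the recursive rule
        new = {(x, z) for (x, y) in reach for (y2, z) in edges if y == y2}
        if new <= reach:  # fixpoint: nothing new is derivable
            return reach
        reach |= new

reach = reachability(edges)
```

The loop terminates because the derived relation grows monotonically within a finite set of node pairs, which is exactly the fixpoint semantics Datalog adds on top of first-order queries.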
What does the earth orbit that causes the seasons to change?
[ "plasma star", "venus", "saturn", "mercury" ]
Key fact: seasonal changes are made in response to changes in the environment
A
0
openbookqa
astroinformatics is an interdisciplinary field of study involving the combination of astronomy, data science, machine learning, informatics, and information / communications technologies. the field is closely related to astrostatistics. data - driven astronomy ( dda ) refers to the use of data science in astronomy. several outputs of telescopic observations and sky surveys are taken into consideration and approaches related to data mining and big data management are used to analyze, filter, and normalize the data set that are further used for making classifications, predictions, and anomaly detections by advanced statistical approaches, digital image processing and machine learning. the output of these processes is used by astronomers and space scientists to study and identify patterns, anomalies, and movements in outer space and conclude theories and discoveries in the cosmos. background astroinformatics is primarily focused on developing the tools, methods, and applications of computational science, data science, machine learning, and statistics for research and education in data - oriented astronomy. early efforts in this direction included data discovery, metadata standards development, data modeling, astronomical data dictionary development, data access, information retrieval, data integration, and data mining in the astronomical virtual observatory initiatives. further development of the field, along with astronomy community endorsement, was presented to the national research council ( united states ) in 2009 in the astroinformatics " state of the profession " position paper for the 2010 astronomy and astrophysics decadal survey. that position paper provided the basis for the subsequent more detailed
the real - time neutron monitor database ( or nmdb ) is a worldwide network of standardized neutron monitors, used to record variations of the primary cosmic rays. the measurements complement space - based cosmic ray measurements. unlike data from satellite experiments, neutron monitor data has never been available in high resolution from many stations in real - time. the data is often only available from the individual stations website, in varying formats, and not in real - time. to overcome this deficit, the european commission is supporting the real - time neutron monitor database ( nmdb ) as an e - infrastructures project in the seventh framework programme in the capacities section. stations that do not have 1 - minute resolution will be supported by the development of an affordable standard registration system that will submit the measurements to the database via the internet in real - time. this resolves the problem of different data formats and for the first time allows to use real - time cosmic ray measurements for space weather predictions ( steigies, klein et al. ) besides creating a database and developing applications working with this data, a part of the project is dedicated to create a public outreach website to inform about cosmic rays and possible effects on humans, technological systems, and the environment ( mavromichalaki et al. ) see also altitude see test european platform ( astep ) references external links nmdb homepage
plasma - based processes play a crucial role in the behavior and evolution of stars and galaxies. plasma, often referred to as the fourth state of matter, is an ionized gas consisting of ions, electrons, and neutral particles. in astrophysical contexts, plasma is the primary component of stars, interstellar and intergalactic media, and various other celestial phenomena. the effects of plasma - based processes on stars and galaxies can be broadly categorized as follows : 1. star formation : plasma processes are essential in the formation of stars. the collapse of molecular clouds, composed mainly of plasma, leads to the formation of protostars. as the protostar continues to accrete mass from the surrounding plasma, its core temperature and pressure increase, eventually leading to nuclear fusion and the birth of a star. 2. stellar evolution : plasma processes govern the nuclear fusion reactions that occur within stars. these reactions convert hydrogen into helium and release energy in the form of radiation, which balances the gravitational forces and prevents the star from collapsing. the balance between these forces determines the star's size, temperature, and luminosity. as stars evolve, they undergo various plasma - based processes, such as convection, mass loss, and the production of heavier elements through nucleosynthesis. 3. supernovae and compact objects : when massive stars exhaust their nuclear fuel, they undergo a core - collapse supernova explosion. this event ejects a significant amount of plasma into the surrounding interstellar medium, en
If birds are singing, the sun is shining, and temps are high, then
[ "daylight is short", "luminescence is long", "winter is here", "nights are freezing" ]
Key fact: the amount of daylight is greatest in the summer
B
1
openbookqa
a temporal database stores data relating to time instances. it offers temporal data types and stores information relating to past, present and future time. temporal databases can be uni - temporal, bi - temporal or tri - temporal. more specifically the temporal aspects usually include valid time, transaction time and / or decision time. valid time is the time period during or event time at which a fact is true in the real world. transaction time is the time at which a fact was recorded in the database. decision time is the time at which the decision was made about the fact. used to keep a history of decisions about valid times. types uni - temporal a uni - temporal database has one axis of time, either the validity range or the system time range. bi - temporal a bi - temporal database has two axes of time : valid time transaction time or decision time tri - temporal a tri - temporal database has three axes of time : valid time transaction time decision time this approach introduces additional complexities. temporal databases are in contrast to current databases ( not to be confused with currently available databases ), which store only facts which are believed to be true at the current time. features temporal databases support managing and accessing temporal data by providing one or more of the following features : a time period datatype, including the ability to represent time periods with no end ( infinity or forever ) the ability to define valid and transaction time period attributes and bitemporal relations system - maintained transaction time temporal primary keys, including
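The bi-temporal case above, each fact carrying a valid-time interval (when it holds in the world) and a transaction time (when it was recorded), can be sketched as follows. Years stand in for full timestamps, and the function names are invented; SQL:2011-style systems provide this natively.

```python
# Sketch of a bi-temporal table: facts carry a valid-time interval and a
# transaction (recording) time, so queries can ask what was true at one
# time according to what the database knew at another. Illustrative only.
facts = []  # (fact, valid_from, valid_to, recorded_at)

def record(fact, valid_from, valid_to, recorded_at):
    facts.append((fact, valid_from, valid_to, recorded_at))

def as_of(valid_at, known_at):
    """Facts true at valid_at, per what the database knew at known_at."""
    return [f for (f, vf, vt, rec) in facts
            if vf <= valid_at < vt and rec <= known_at]

# An address change valid from 2020 but only entered into the database
# in 2021 (a late-arriving fact):
record("lives-in-oslo", 2015, 2020, 2015)
record("lives-in-bergen", 2020, 9999, 2021)
```

Queried with a 2020 knowledge cutoff the move is invisible, while the same valid-time query against 2021 knowledge returns it, which is precisely what separating the two time axes buys.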
then the bulk of effort concentrates on writing the proper mediator code that will transform predicates on weather into a query over the weather website. this effort can become complex if some other source also relates to weather, because the designer may need to write code to properly combine the results from the two sources. on the other hand, in lav, the source database is modeled as a set of views over g { \ displaystyle g }.
then the bulk of effort concentrates on writing the proper mediator code that will transform predicates on weather into a query over the weather website. this effort can become complex if some other source also relates to weather, because the designer may need to write code to properly combine the results from the two sources. on the other hand, in lav, the source database is modeled as a set of views over g { \ displaystyle g }.
Poison causes harm to
[ "car engines", "televisions", "wombats", "corpses" ]
Key fact: poison causes harm to living things
C
2
openbookqa
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop products, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
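The core idea, that the database and the programming language share one model of representation, can be hinted at with Python's standard-library `shelve` module. This is a stand-in sketch, not a real OODBMS, and the `Point` class is hypothetical:

```python
import os
import shelve
import tempfile

class Point:
    """A plain application object; it is stored as-is, not mapped to tables."""
    def __init__(self, x, y):
        self.x, self.y = x, y

path = os.path.join(tempfile.mkdtemp(), "objects")

with shelve.open(path) as db:   # persist the object under a key
    db["origin"] = Point(0, 0)

with shelve.open(path) as db:   # load it back as the same kind of object
    p = db["origin"]

print(p.x, p.y)  # 0 0
```

The program never translates between objects and rows; that single-environment consistency is what the passage attributes to OODBMSs.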
genedb was a genome database for eukaryotic and prokaryotic pathogens. references external links http : / / www. genedb. org
until the 1980s, databases were viewed as computer systems that stored record - oriented and business data such as manufacturing inventories, bank records, and sales transactions. a database system was not expected to merge numeric data with text, images, or multimedia information, nor was it expected to automatically notice patterns in the data it stored. in the late 1980s the concept of an intelligent database was put forward as a system that manages information ( rather than data ) in a way that appears natural to users and which goes beyond simple record keeping. the term was introduced in 1989 by the book intelligent databases by kamran parsaye, mark chignell, setrag khoshafian and harry wong. the concept postulated three levels of intelligence for such systems : high level tools, the user interface and the database engine. the high level tools manage data quality and automatically discover relevant patterns in the data with a process called data mining. this layer often relies on the use of artificial intelligence techniques. the user interface uses hypermedia in a form that uniformly manages text, images and numeric data. the intelligent database engine supports the other two layers, often merging relational database techniques with object orientation. in the twenty - first century, intelligent databases have now become widespread, e. g. hospital databases can now call up patient histories consisting of charts, text and x - ray images just with a few mouse clicks, and many corporate databases include decision support tools based on sales pattern analysis. external links intelligent databases, book
What contains chlorophyll?
[ "plastic", "water", "green organelles", "paper" ]
Key fact: a chloroplast contains chlorophyll
C
2
openbookqa
the chemical database service is an epsrc - funded mid - range facility that provides uk academic institutions with access to a number of chemical databases. it has been hosted by the royal society of chemistry since 2013, before which it was hosted by daresbury laboratory ( part of the science and technology facilities council ). currently, the included databases are : acd / i - lab, a tool for prediction of physicochemical properties and nmr spectra from a chemical structure ; available chemicals directory, a structure - searchable database of commercially available chemicals ; cambridge structural database ( csd ), a crystallographic database of organic and organometallic structures ; inorganic crystal structure database ( icsd ), a crystallographic database of inorganic structures ; crystalworks, a database combining data from csd, icsd and crystmet ; detherm, a database of thermophysical data for chemical compounds and mixtures ; and spresiweb, a database of organic compounds and reactions. references
a chemical database is a database specifically designed to store chemical information. this information is about chemical and crystal structures, spectra, reactions and syntheses, and thermophysical data. types of chemical databases bioactivity database bioactivity databases correlate structures or other chemical information to bioactivity results taken from bioassays in literature, patents, and screening programs. chemical structures chemical structures are traditionally represented using lines indicating chemical bonds between atoms and drawn on paper ( 2d structural formulae ). while these are ideal visual representations for the chemist, they are unsuitable for computational use and especially for search and storage. small molecules ( also called ligands in drug design applications ), are usually represented using lists of atoms and their connections. large molecules such as proteins are however more compactly represented using the sequences of their amino acid building blocks. radioactive isotopes are also represented, which is an important attribute for some applications. large chemical databases for structures are expected to handle the storage and searching of information on millions of molecules taking terabytes of physical memory. literature database chemical literature databases correlate structures or other chemical information to relevant references such as academic papers or patents. this type of database includes stn, scifinder, and reaxys. links to literature are also included in many databases that focus on chemical characterization. crystallographic database crystallographic databases store x - ray crystal structure data. common examples include protein data bank and cambridge structural database. nmr spectra database nmr
a crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. they are characterized by symmetry, morphology, and directionally dependent physical properties. a crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. ( molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in x - ray, neutron, and electron diffraction based crystallography ). crystal structures of crystalline material are typically determined from x - ray or neutron single - crystal diffraction data and stored in crystal structure databases. they are routinely identified by comparing reflection intensities and lattice spacings from x - ray powder diffraction data with entries in powder - diffraction fingerprinting databases. crystal structures of nanometer sized crystalline samples can be determined via structure factor amplitude information from single - crystal electron diffraction data or structure factor amplitude and phase angle information from fourier transforms of hrtem images of crystallites. they are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice - fringe fingerprint plots with entries in a lattice - fringe fingerprinting database. crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. many provide structure visualization capabilities. they can be browser based or
Two ships grazing each other as they pass will
[ "cause mayhem", "sink them", "speed them up", "slow them down" ]
Key fact: friction acts to counter the motion of two objects when their surfaces are touching
D
3
openbookqa
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ) system, rather than an oltp ( online transaction processing ) system. modern decision - support databases and classical statistical databases are often closer to the relational model than to the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
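The privacy problem described above, that combinations of aggregate queries can leak individual records, is easy to demonstrate. The data below is made up; the attack itself (sometimes called a tracker attack) is the standard one:

```python
# Sketch of why aggregate-only access can leak individual values
# (hypothetical data): subtracting two overlapping SUM queries
# isolates one person's salary.
salaries = {"ann": 50, "bob": 60, "carol": 70, "dave": 80}

def q_sum(predicate):
    """The only query the statistical database allows: an aggregate."""
    return sum(v for k, v in salaries.items() if predicate(k))

total = q_sum(lambda name: True)                 # SUM over everyone
all_but_bob = q_sum(lambda name: name != "bob")  # SUM over a large subset
print(total - all_but_bob)  # 60 -- bob's individual salary, recovered
```

Neither query alone reveals an individual record, which is why defenses such as query-set-size restrictions and noise addition reason about combinations of queries rather than single ones.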
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by
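The minimal definition above (data presented as tables of rows and columns, manipulated with relational operators such as SQL) can be shown with Python's built-in `sqlite3`. The schema is a hypothetical example:

```python
import sqlite3

# Minimal illustration of the relational model: a relation presented as a
# table of rows and columns, queried and updated with SQL.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")
con.executemany(
    "INSERT INTO employee VALUES (?, ?, ?)",
    [(1, "ada", "eng"), (2, "grace", "eng"), (3, "alan", "math")],
)
# A relational operator (selection + projection) expressed in SQL:
rows = con.execute(
    "SELECT name FROM employee WHERE dept = 'eng' ORDER BY name"
).fetchall()
print(rows)  # [('ada',), ('grace',)]
```

The query never says *how* to find the rows, only *which* rows, which is the declarative character Codd's model introduced.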
exploitdb, sometimes stylized as exploit database or exploit - database, is a public and open source vulnerability database maintained by offensive security. it is one of the largest and most popular exploit databases in existence. while the database is publicly available via their website, the database can also be used by utilizing the searchsploit command - line tool which is native to kali linux. the database also contains proof - of - concepts ( pocs ), helping information security professionals learn new exploit variations. in ethical hacking and penetration testing guide, rafay baloch said exploit - db had over 20, 000 exploits, and was available in backtrack linux by default. in ceh v10 certified ethical hacker study guide, ric messier called exploit - db a " great resource ", and stated it was available within kali linux by default, or could be added to other linux distributions. the current maintainers of the database, offensive security, are not responsible for creating the database. the database was started in 2004 by a hacker group known as milw0rm and has changed hands several times. as of 2023, the database contained 45, 000 entries from more than 9, 000 unique authors. see also offensive security offensive security certified professional references external links official website
A shout is made into the night sky and carries
[ "a paper bag", "a baby", "a stick", "on the gales" ]
Key fact: air is a vehicle for sound
D
3
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
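The claim that relationships are first-class and traversable "with one operation" can be sketched with a plain dictionary keyed by labelled, directed edges. Node and label names here are hypothetical:

```python
# Sketch of the graph model: relationships stored as first-class,
# labelled, directed edges, so traversal is a direct lookup rather
# than a join over tables.
edges = {
    ("alice", "KNOWS"): ["bob"],
    ("bob", "KNOWS"): ["carol"],
}

def neighbours(node, label):
    """One operation: follow all outgoing edges with the given label."""
    return edges.get((node, label), [])

def friends_of_friends(node):
    """Chained traversal -- the kind of query graph databases optimize for."""
    return [f2 for f1 in neighbours(node, "KNOWS")
               for f2 in neighbours(f1, "KNOWS")]

print(friends_of_friends("alice"))  # ['carol']
```

In a relational store the same two-hop query would typically require a self-join; here each hop is a constant-time lookup because the edge list is stored with the node.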
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by
A person wants to make some tomato plants grow, so they get
[ "older", "dirt", "corn", "married" ]
Key fact: soil is a renewable resource for growing plants
B
1
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by
the plant proteome database is a national science foundation - funded project to determine the biological function of each protein in plants. it includes data for two plants that are widely studied in molecular biology, arabidopsis thaliana and maize ( zea mays ). initially the project was limited to plant plastids, under the name of the plastid pdb, but was expanded and renamed plant pdb in november 2007. see also proteome references external links plant proteome database home page
A piece of pizza is placed within a box and becomes this when something is conducted throughout the box into the food:
[ "toasted", "meaty", "frozen", "cool" ]
Key fact: thermal conduction is when materials conduct heat through those materials
A
0
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
a relational database ( rdb ) is a database based on the relational model of data, as proposed by e. f. codd in 1970. a relational database management system ( rdbms ) is a type of database management system that stores data in a structured format using rows and columns. many relational database systems are equipped with the option of using sql ( structured query language ) for querying and updating the database. history the concept of relational database was defined by e. f. codd at ibm in 1970. codd introduced the term relational in his research paper " a relational model of data for large shared data banks ". in this paper and later papers, he defined what he meant by relation. one well - known definition of what constitutes a relational database system is composed of codd's 12 rules. however, no commercial implementations of the relational model conform to all of codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum : present the data to the user as relations ( a presentation in tabular form, i. e. as a collection of tables with each table consisting of a set of rows and columns ) ; provide relational operators to manipulate the data in tabular form. in 1974, ibm began developing system r, a research project to develop a prototype rdbms. the first system sold as an rdbms was multics relational data store ( june 1976 ). oracle was released in 1979 by
John was able to read at night even though the electricity had gone out. John was using
[ "a pepperoni and cheese pizza", "a heavy maple desk", "an item made from a petroleum product with a cord sticking out the top", "an old step ladder" ]
Key fact: a candle is a source of light when it is burned
C
2
openbookqa
the spiral staircase in figure below also contains an inclined plane. do you see it? the stairs that wrap around the inside of the walls make up the inclined plane. the spiral staircase is an example of a screw. a screw is a simple machine that consists of an inclined plane wrapped around a cylinder or cone. no doubt you are familiar with screws like the wood screw in figure below. the screw top of the container in the figure is another example. screws move objects to a higher elevation ( or greater depth ) by increasing the force applied.
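The force-increasing effect of a screw can be quantified with the textbook ideal-mechanical-advantage formula (not stated in the passage, so treat it as an added assumption): one turn advances the load by the thread pitch while the effort travels the circumference.

```python
import math

# Ideal mechanical advantage of a screw: effort travels the circumference
# per turn, the load advances by the pitch, so IMA = circumference / pitch.
# The dimensions below are hypothetical.
radius_mm = 10.0  # radius at which the turning force is applied
pitch_mm = 2.0    # distance between adjacent threads

ima = (2 * math.pi * radius_mm) / pitch_mm
print(round(ima, 1))  # 31.4 -- the input force is multiplied ~31x (ignoring friction)
```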
in computer science, a stack is an abstract data type that serves as a collection of elements, with two main operations : push, which adds an element to the collection, and pop, which removes the most recently added element that was not yet removed. additionally, a peek operation can, without modifying the stack, return the value of the last element added. calling this structure a stack is by analogy to a set of physical items stacked one atop another, such as a stack of plates. the order in which an element added to or removed from a stack is described as last in, first out, referred to by the acronym lifo. as with a stack of physical objects, this structure makes it easy to take an item off the top of the stack, but accessing a datum deeper in the stack may require taking off multiple other items first. considered as a linear data structure, or more abstractly a sequential collection, the push and pop operations occur only at one end of the structure, referred to as the top of the stack.
in computer science, a stack is an abstract data type that serves as a collection of elements with two main operations : push, which adds an element to the collection, and pop, which removes the most recently added element. additionally, a peek operation can, without modifying the stack, return the value of the last element added. the name stack is an analogy to a set of physical items stacked one atop another, such as a stack of plates. the order in which an element added to or removed from a stack is described as last in, first out, referred to by the acronym lifo. as with a stack of physical objects, this structure makes it easy to take an item off the top of the stack, but accessing a datum deeper in the stack may require removing multiple other items first. considered a sequential collection, a stack has one end which is the only position at which the push and pop operations may occur, the top of the stack, and is fixed at the other end, the bottom. a stack may be implemented as, for example, a singly linked list with a pointer to the top element. a stack may be implemented to have a bounded capacity. if the stack is full and does not contain enough space to accept another element, the stack is in a state of stack overflow. history stacks entered the computer science literature in 1946, when alan turing used the terms " bury " and " unbury " as a means of calling and returning from subroutines
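The push, pop, and peek operations described above map directly onto a Python list used at one end only, which is a common minimal stack implementation:

```python
# A stack sketch using a Python list: push and pop happen only at one
# end (the top), giving last-in, first-out (LIFO) order.
stack = []

stack.append("a")   # push
stack.append("b")   # push

top = stack[-1]     # peek: read the top without modifying the stack
assert top == "b"

print(stack.pop())  # pop -> 'b' (the most recently added element)
print(stack.pop())  # pop -> 'a'
```

As the passage notes, reaching an item deeper in the stack requires popping everything above it first; there is no random access in the abstract data type itself.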
Turning a piece of paper into a ball is an example of
[ "folding", "flattening", "squashing", "restoring" ]
Key fact: crumple means change shape from smooth into compacted by physical force
C
2
openbookqa
the problem of database repair is a question about relational databases which has been studied in database theory, and which is a particular kind of data cleansing. the problem asks about how we can " repair " an input relational database in order to make it satisfy integrity constraints. the goal of the problem is to be able to work with data that is " dirty ", i. e., does not satisfy the right integrity constraints, by reasoning about all possible repairs of the data, i. e., all possible ways to change the data to make it satisfy the integrity constraints, without committing to a specific choice. several variations of the problem exist, depending on : what we intend to figure out about the dirty data : figuring out if some database tuple is certain ( i. e., is in every repaired database ), figuring out if some query answer is certain ( i. e., the answer is returned when evaluating the query on every repaired database ) which kinds of ways are allowed to repair the database : can we insert new facts, remove facts ( so - called subset repairs ), and so on which repaired databases do we study : those where we only change a minimal subset of the database tuples ( e. g., minimal subset repairs ), those where we only change a minimal number of database tuples ( e. g., minimal cardinality repairs ) the problem of database repair has been studied to understand what is the complexity of these different problem variants, i. e.,
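The repair variants above can be made concrete with a toy example. Here the integrity constraint (hypothetical) is that the first attribute is a key, repairs are deletions only, and we keep the repairs that delete the fewest tuples (cardinality repairs); a tuple is *certain* if it survives in every repair:

```python
from itertools import combinations

# Toy database violating a key constraint on the first attribute:
facts = [("bob", "london"), ("bob", "paris"), ("ann", "oslo")]

def satisfies_key(db):
    """Integrity constraint: no two tuples share the same first attribute."""
    names = [n for n, _ in db]
    return len(names) == len(set(names))

def cardinality_repairs(db):
    """All consistent subsets obtained by deleting as few tuples as possible."""
    for r in range(len(db), -1, -1):
        found = [tuple(s) for s in combinations(db, r) if satisfies_key(s)]
        if found:
            return found
    return []

repairs = cardinality_repairs(facts)          # two repairs: keep one bob tuple each
certain = [t for t in facts if all(t in rep for rep in repairs)]
print(certain)  # [('ann', 'oslo')] -- the only tuple in every repair
```

Reasoning over *all* repairs without committing to one is exactly the "certain answer" semantics the passage describes; the brute-force enumeration here is only for illustration, since the real complexity questions are what the research studies.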
database theory encapsulates a broad range of topics related to the study and research of the theoretical realm of databases and database management systems. theoretical aspects of data management include, among other areas, the foundations of query languages, computational complexity and expressive power of queries, finite model theory, database design theory, dependency theory, foundations of concurrency control and database recovery, deductive databases, temporal and spatial databases, real - time databases, managing uncertain data and probabilistic databases, and web data. most research work has traditionally been based on the relational model, since this model is usually considered the simplest and most foundational model of interest. corresponding results for other data models, such as object - oriented or semi - structured models, or, more recently, graph data models and xml, are often derivable from those for the relational model. database theory helps one to understand the complexity and power of query languages and their connection to logic. starting from relational algebra and first - order logic ( which are equivalent by codd's theorem ) and the insight that important queries such as graph reachability are not expressible in this language, more powerful language based on logic programming and fixpoint logic such as datalog were studied. the theory also explores foundations of query optimization and data integration. here most work studied conjunctive queries, which admit query optimization even under constraints using the chase algorithm. the main research conferences in the area are the acm symposium on principles of database systems ( pods
a database dump contains a record of the table structure and / or the data from a database and is usually in the form of a list of sql statements ( " sql dump " ). a database dump is most often used for backing up a database so that its contents can be restored in the event of data loss. corrupted databases can often be recovered by analysis of the dump. database dumps are often published by free content projects, to facilitate reuse, forking, offline use, and long - term digital preservation. dumps can be transported into environments with internet blackouts or otherwise restricted internet access, as well as facilitate local searching of the database using sophisticated tools such as grep. see also import and export of data core dump databases database management system sqlyog - mysql gui tool to generate database dump data portability external links mysqldump a database backup program postgresql dump backup methods, for postgresql databases.
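The dump-and-restore cycle described above can be sketched with the standard library's sqlite3 module, whose `iterdump()` yields exactly such a list of SQL statements (table names here are illustrative):

```python
import sqlite3

# build a tiny in-memory database
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
src.execute("INSERT INTO t (name) VALUES ('ada'), ('bob')")
src.commit()

# an "sql dump": the table structure and data as SQL statements
dump = list(src.iterdump())

# restore: replay the dump against a fresh database
dst = sqlite3.connect(":memory:")
dst.executescript("\n".join(dump))
rows = dst.execute("SELECT name FROM t ORDER BY id").fetchall()
print(rows)  # [('ada',), ('bob',)]
```

The dump is plain text, which is why such files are easy to publish, grep locally, or carry into environments without network access.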
DNA is a vehicle for passing inherited characteristics from parent to what?
[ "pets", "homes", "younglings", "food" ]
Key fact: DNA is a vehicle for passing inherited characteristics from parent to offspring
C
2
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
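The idea that relationships are stored directly and traversed in one operation can be sketched with a minimal property graph. The node and edge names are made up for illustration:

```python
# nodes with properties, and labelled, directed edges with properties;
# relationships are first-class, so traversal is a lookup, not a join
nodes = {
    "alice": {"kind": "person"},
    "bob":   {"kind": "person"},
    "acme":  {"kind": "company"},
}
edges = [
    ("alice", "KNOWS", "bob", {"since": 2019}),
    ("alice", "WORKS_AT", "acme", {"role": "engineer"}),
    ("bob", "WORKS_AT", "acme", {"role": "designer"}),
]

def neighbours(node, label=None):
    """Follow outgoing edges from node, optionally filtered by label."""
    return [dst for src, lbl, dst, _ in edges
            if src == node and (label is None or lbl == label)]

print(neighbours("alice"))              # ['bob', 'acme']
print(neighbours("alice", "WORKS_AT"))  # ['acme']
```

A real graph database adds indexing and a query language on top, but the data model is essentially this: labelled, directed, property-carrying edges between property-carrying nodes.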
the animal genome size database is a catalogue of published genome size estimates for vertebrate and invertebrate animals. it was created in 2001 by dr. t. ryan gregory of the university of guelph in canada. as of september 2005, the database contains data for over 4, 000 species of animals. a similar database, the plant dna c - values database ( c - value being analogous to genome size in diploid organisms ) was created by researchers at the royal botanic gardens, kew, in 1997. see also list of organisms by chromosome count references external links animal genome size database plant dna c - values database fungal genome size database cell size database
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, object - relational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ), also called odbms ( object database management system ), combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop products, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
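Storing objects as objects, rather than decomposing them into tables, can be loosely illustrated with the standard library's shelve module (a pickle-backed persistent mapping; the `Part` class and keys are invented for the example, and a real OODBMS offers far more, such as queries and transactions):

```python
import os
import shelve
import tempfile

class Part:
    """A toy composite object: a part that may contain child parts."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

path = os.path.join(tempfile.mkdtemp(), "parts")

# persist whole object graphs under keys, no table mapping needed
with shelve.open(path) as db:
    db["wheel"] = Part("wheel")
    db["car"] = Part("car", children=[Part("wheel"), Part("engine")])

# read them back as the same kind of objects the program uses
with shelve.open(path) as db:
    car = db["car"]
    print([c.name for c in car.children])  # ['wheel', 'engine']
```

The point of the sketch is the one-environment property the text describes: the stored representation and the in-program representation are the same object model.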
A navigator may be in charge of
[ "shoveling a sidewalk", "clearing a tree", "conducting a car", "riding a bike" ]
Key fact: An example of navigation is directing a boat
C
2
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, object - relational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ), also called odbms ( object database management system ), combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop products, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
database theory encapsulates a broad range of topics related to the study and research of the theoretical realm of databases and database management systems. theoretical aspects of data management include, among other areas, the foundations of query languages, computational complexity and expressive power of queries, finite model theory, database design theory, dependency theory, foundations of concurrency control and database recovery, deductive databases, temporal and spatial databases, real - time databases, managing uncertain data and probabilistic databases, and web data. most research work has traditionally been based on the relational model, since this model is usually considered the simplest and most foundational model of interest. corresponding results for other data models, such as object - oriented or semi - structured models, or, more recently, graph data models and xml, are often derivable from those for the relational model. database theory helps one to understand the complexity and power of query languages and their connection to logic. starting from relational algebra and first - order logic ( which are equivalent by codd's theorem ) and the insight that important queries such as graph reachability are not expressible in this language, more powerful language based on logic programming and fixpoint logic such as datalog were studied. the theory also explores foundations of query optimization and data integration. here most work studied conjunctive queries, which admit query optimization even under constraints using the chase algorithm. the main research conferences in the area are the acm symposium on principles of database systems ( pods
If a room is going to be humid, or dry, depends on how much water vapor is in the air, so if a room wants to be humid
[ "run a bath", "use a dehumidifier", "open a window", "hope it rains" ]
Key fact: humidity is the amount of water vapor in the air
A
0
openbookqa
then the bulk of effort concentrates on writing the proper mediator code that will transform predicates on weather into a query over the weather website. this effort can become complex if some other source also relates to weather, because the designer may need to write code to properly combine the results from the two sources. on the other hand, in lav, the source database is modeled as a set of views over g { \ displaystyle g }.
then the bulk of effort concentrates on writing the proper mediator code that will transform predicates on weather into a query over the weather website. this effort can become complex if some other source also relates to weather, because the designer may need to write code to properly combine the results from the two sources. on the other hand, in lav, the source database is modeled as a set of views over g { \ displaystyle g }.
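The mediator work described above, rewriting a predicate on the global weather relation into calls on the sources and combining their answers, can be sketched as follows. The sources, cities, and temperatures are entirely hypothetical:

```python
# toy global schema: weather(city, temp_c)
# two made-up sources exposing overlapping weather data
def source_a(city):
    return {"paris": 18, "oslo": 7}.get(city)

def source_b(city):
    return {"oslo": 7, "rome": 24}.get(city)

def mediator_weather(city):
    """GAV-style mediator: a query on the global weather predicate
    is rewritten into source calls, and the results are combined."""
    for source in (source_a, source_b):
        temp = source(city)
        if temp is not None:
            return temp
    return None

print(mediator_weather("rome"))  # 24, answered by source_b
```

The hand-written combination logic in `mediator_weather` is exactly the code that grows complex when several sources relate to the same global relation; the LAV alternative instead describes each source as a view over the global schema and lets a rewriting algorithm do this work.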
sqsh, a shell available with some sql implementations for database queries and other tasks. google shell, a browser - based front - end for google search
A dark cave fulfills what need for a wild, roaming grizzly bear?
[ "exercise for its health", "shelter for its safety", "food for it to eat", "friends for some companionship" ]
Key fact: an animal requires shelter
B
1
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
dr. duke's phytochemical and ethnobotanical databases is an online database developed by james a. duke at the usda. the databases report species, phytochemicals, and biological activity, as well as ethnobotanical uses. the current phytochemical and ethnobotanical databases facilitate plant, chemical, bioactivity, and ethnobotany searches. a large number of plants and their chemical profiles are covered, and data are structured to support browsing and searching in several user - focused ways. for example, users can get a list of chemicals and activities for a specific plant of interest, using either its scientific or common name download a list of chemicals and their known activities in pdf or spreadsheet form find plants with chemicals known for a specific biological activity display a list of chemicals with their ld toxicity data find plants with potential cancer - preventing activity display a list of plants for a given ethnobotanical use find out which plants have the highest levels of a specific chemical references to the supporting scientific publications are provided for each specific result. also included are links to nutritional databases, plants and cancer treatments and other plant - related databases. the content of the database is licensed under the creative commons cc0 public domain. external links dr. duke's phytochemical and ethnobotanical databases references ( dataset ) u. s. department of agriculture, agricultural research service. 1992 - 2016
health 3. 0 is a health - related extension of the concept of web 3. 0 whereby the users'interface with the data and information available on the web is personalized to optimize their experience. this is based on the concept of the semantic web, wherein websites'data is accessible for sorting in order to tailor the presentation of information based on user preferences. health 3. 0 will use such data access to enable individuals to better retrieve and contribute to personalized health - related information within networked electronic health records, and social networking resources. health 3. 0 has also been described as the idea of semantically organizing electronic health records to create an open healthcare information architecture. health care could also make use of social media, and incorporate virtual tools for enhanced interactions between health care providers and consumers / patients. goals improved access to health related information on the web via semantic and networked resources will facilitate an improved understanding of health issues with the goal of increasing patient self - management, preventative care and enhancing health professional expertise. health 3. 0 will foster the creation and maintenance of supportive virtual communities within which individuals can help one another understand, cope with, and manage common health - related issues. personalized social networking resources can also serve as a medium for health professionals to improve individuals'access to healthcare expertise, and to facilitate health professional - to - many - patients communication with the goal of improved acceptance, understanding and adherence to best therapeutic options. " digital healing " has been described as a goal of
What activity changes the local environment the most?
[ "people sometimes camping", "moderate rain", "over-logging", "rabbit breeding" ]
Key fact: humans changing an environment sometimes causes that environment to be destroyed
C
2
openbookqa
a semi - circular bund ( also known as a demi - lune or half - moon ) is a rainwater harvesting technique consisting in digging semi - lunar holes in the ground with the opening perpendicular to the flow of water. these techniques are particularly beneficial in areas where rainfall is scarce and irregular, namely arid and semi - arid regions. semi - circular bunds primarily serve to slow down and retain runoff, ensuring that the plants inside them receive necessary water. background crop cultivation, grazing, and forestry are particularly challenging in drylands. local communities often lack the financial and practical resources to establish irrigation systems or use chemical fertilizers. as such, these are generally considered infeasible solutions for these areas. as a result, rainfall harvesting techniques are widely adopted to efficiently retain rainwater while minimizing the need for additional materials and financial investment. there are various rainfall harvesting techniques, all sharing the fundamental principle of constructing or excavating structures using natural materials such as soil and stones. these techniques include planting pits, infiltration basin and microbasins, and cross - slope barriers. semi - circular bunds fall in the subcategory of microcatchment water harvesting. beyond their primary function of reducing runoff for agricultural purposes, these methods offer additional benefits, such as providing extra drinking water for livestock, enabling land reclamation, enhancing soil fertility, accelerating timber growth for firewood, and influencing regional atmospheric patterns, potentially leading to increased precipitation. origins and recent development semi - circular bun
then the bulk of effort concentrates on writing the proper mediator code that will transform predicates on weather into a query over the weather website. this effort can become complex if some other source also relates to weather, because the designer may need to write code to properly combine the results from the two sources. on the other hand, in lav, the source database is modeled as a set of views over g { \ displaystyle g }.
then the bulk of effort concentrates on writing the proper mediator code that will transform predicates on weather into a query over the weather website. this effort can become complex if some other source also relates to weather, because the designer may need to write code to properly combine the results from the two sources. on the other hand, in lav, the source database is modeled as a set of views over g { \ displaystyle g }.
Where might you find eggs?
[ "forest", "space", "lava", "ocean" ]
Key fact: some birds live in forests
A
0
openbookqa
metpetdb is a relational database and repository for global geochemical data on and images collected from metamorphic rocks from the earth's crust. metpetdb is designed and built by a global community of metamorphic petrologists in collaboration with computer scientists at rensselaer polytechnic institute as part of the national cyberinfrastructure initiative and supported by the national science foundation. metpetdb is unique in that it incorporates image data collected by a variety of techniques, e. g. photomicrographs, backscattered electron images ( sem ), and x - ray maps collected by wavelength dispersive spectroscopy or energy dispersive spectroscopy. purpose metpetdb was built for the purpose of archiving published data and for storing new data for ready access to researchers and students in the petrologic community. this database facilitates the gathering of information for researchers beginning new projects and permits browsing and searching for data relating to anywhere on the globe. metpetdb provides a platform for collaborative studies among researchers anywhere on the planet, serves as a portal for students beginning their studies of metamorphic geology, and acts as a repository of vast quantities of data being collected by researchers globally. design the basic structure of metpetdb is based on a geologic sample and derivative subsamples. geochemical data are linked to subsamples and the minerals within them, while image data can relate to samples or subsamples. metpetdb is designed to store the distinct spatial / textural context
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
blazegraph is an open source triplestore and graph database, written in java. it has been abandoned since 2020 and is known to be used in production by wmde for the wikidata sparql endpoint. it is licensed under the gnu gpl ( version 2 ). amazon acquired the blazegraph developers and the blazegraph open source development was essentially stopped in april 2018. early history the system was first known as bigdata. since release of version 1. 5 ( 12 february 2015 ), it is named blazegraph. prominent users the wikimedia foundation uses blazegraph for the wikidata query service, which is a sparql endpoint. sophox, a fork of the wikidata query service, specializes in openstreetmap queries. the datatourisme project uses blazegraph as the database platform ; however, graphql is used as the query language instead of sparql. notable features rdf * an alternative approach to rdf reification, which gives rdf graphs capabilities of lpg graphs ; as the consequence of the previous, ability of querying graphs both in sparql and gremlin ; as an alternative to gremlin querying, gas abstraction over rdf graphs support in sparql ; the service syntax of federated queries for functionality extending ; managed behavior of the query plan generator ; reusable named subqueries. acqui -
A seaman will likely use this tool more than other:
[ "plate with a plug", "candle with a wick", "disc with an arrow", "card with a strip" ]
Key fact: a compass is used to navigate oceans
C
2
openbookqa
a crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. they are characterized by symmetry, morphology, and directionally dependent physical properties. a crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. ( molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in x - ray, neutron, and electron diffraction based crystallography ). crystal structures of crystalline material are typically determined from x - ray or neutron single - crystal diffraction data and stored in crystal structure databases. they are routinely identified by comparing reflection intensities and lattice spacings from x - ray powder diffraction data with entries in powder - diffraction fingerprinting databases. crystal structures of nanometer sized crystalline samples can be determined via structure factor amplitude information from single - crystal electron diffraction data or structure factor amplitude and phase angle information from fourier transforms of hrtem images of crystallites. they are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice - fringe fingerprint plots with entries in a lattice - fringe fingerprinting database. crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. many provide structure visualization capabilities. they can be browser based or
a wholesaler might have a tub file with cards for frequent customers and for each inventory item. instead of keypunching a set of cards for each purchase order, a clerk would pull out one customer card and then a card for each item that customer ordered. the resulting deck could then be run through a tabulating machine to produce an invoice. in this example item cards also provided inventory control, since each card presumably represented one item in stock, so it was simple to check availability and schedule reordering.
an optical disc is a flat, usually disc - shaped object that stores information in the form of physical variations on its surface that can be read with the aid of a beam of light. optical discs can be reflective, where the light source and detector are on the same side of the disc, or transmissive, where light shines through the disc to be detected on the other side. optical discs can store analog information ( e. g. laserdisc ), digital information ( e. g. dvd ), or store both on the same disc ( e. g. cd video ). their main uses are the distribution of media and data, and long - term archival. design and technology the encoding material sits atop a thicker substrate ( usually polycarbonate ) that makes up the bulk of the disc and forms a dust defocusing layer. the encoding pattern follows a continuous, spiral path covering the entire disc surface and extending from the innermost track to the outermost track. the data are stored on the disc with a laser or stamping machine, and can be accessed when the data path is illuminated with a laser diode in an optical disc drive that spins the disc at speeds of about 200 to 4, 000 rpm or more, depending on the drive type, disc format, and the distance of the read head from the center of the disc ( outer tracks are read at a higher data speed due to higher linear velocities at the same angular velocities ). most
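The claim that outer tracks read faster at the same angular velocity follows from v = 2πr · rpm / 60. The radii below are rough CD-like values chosen for illustration, not taken from any specification:

```python
import math

def linear_speed_m_s(radius_m, rpm):
    """Linear speed of a point at radius_m on a disc spinning at rpm."""
    return 2 * math.pi * radius_m * rpm / 60

# at a constant angular velocity, compare inner vs outer track
inner = linear_speed_m_s(0.025, 500)  # ~2.5 cm, near the innermost track
outer = linear_speed_m_s(0.058, 500)  # ~5.8 cm, near the outermost track
print(round(outer / inner, 2))  # 2.32: the outer track passes the head faster
```

Since the data rate scales with linear velocity, a disc read at constant angular velocity delivers data faster from its outer tracks, as the text states.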
A river rushes and pebbles are smacked around one another until
[ "they are wet", "they are clear", "they are rough", "they are velvety" ]
Key fact: contact between rocks over long periods of time causes rocks to smooth
D
3
openbookqa
a flat - file database is a database stored in a file called a flat file. records follow a uniform format, and there are no structures for indexing or recognizing relationships between records. the file is simple. a flat file can be a plain text file ( e. g. csv, txt or tsv ), or a binary file. relationships can be inferred from the data in the database, but the database format itself does not make those relationships explicit. the term has generally implied a small database, but very large databases can also be flat. overview plain text files usually contain one record per line. examples of flat files include / etc / passwd and / etc / group on unix - like operating systems. another example of a flat file is a name - and - address list with the fields name, address and phone number. flat files are typically either delimiter - separated or fixed - width. delimiter - separated values in delimiter - separated values files, the fields are separated by a character or string called the delimiter. common variants are comma - separated values ( csv ) where the delimiter is a comma, tab - separated values ( tsv ) where the delimiter is the tab character ), space - separated values and vertical - bar - separated values ( delimiter is | ). if the delimiter is allowed inside a field, there needs to be a way to distinguish delimiters characters or
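The delimiter-separated format above, including the problem of a delimiter appearing inside a field, can be shown with the standard library's csv module; the name-and-address records are invented for the example:

```python
import csv
import io

# a flat file: one record per line, comma-delimited, uniform fields;
# quoting distinguishes a delimiter inside a field from a real separator
flat_file = io.StringIO(
    'name,address,phone\n'
    'ada,"1 Main St, Springfield",555-0101\n'
    'bob,2 Oak Ave,555-0102\n'
)

reader = csv.DictReader(flat_file)
records = list(reader)
print(records[0]["address"])  # 1 Main St, Springfield
print(records[1]["phone"])    # 555-0102
```

Note that the file format itself carries no relationships or indexes; anything beyond sequential field access has to be inferred by the program reading it, which is exactly what distinguishes a flat file from a structured database.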
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
What orbits the earth 13 times per year?
[ "night illuminator", "clouds", "oceans", "moon pictures" ]
Key fact: the moon orbits the Earth approximately 13 times per year
A
0
openbookqa
starlight is a software product originally developed at pacific northwest national laboratory and now by future point systems. it is an advanced visual analysis environment. in addition to using information visualization to show the importance of individual pieces of data by showing how they relate to one another, it also contains a small suite of tools useful for collaboration and data sharing, as well as data conversion, processing, augmentation and loading. the software, originally developed for the intelligence community, allows users to load data from xml files, databases, rss feeds, web services, html files, microsoft word, powerpoint, excel, csv, adobe pdf, txt files, etc. and analyze it with a variety of visualizations and tools. the system integrates structured, unstructured, geospatial, and multimedia data, offering comparisons of information at multiple levels of abstraction, simultaneously and in near real - time. in addition starlight allows users to build their own named entity - extractors using a combination of algorithms, targeted normalization lists and regular expressions in the starlight data engineer ( sde ). as an example, starlight might be used to look for correlations in a database containing records about chemical spills. an analyst could begin by grouping records according to the cause of the spill to reveal general trends. sorting the data a second time, they could apply different colors based on related details such as the company responsible, age of equipment or geographic location. maps and photographs could be integrated into
integrated surface database ( isd ) is a global database compiled by the national oceanic and atmospheric administration ( noaa ) and the national centers for environmental information ( ncei ) comprising hourly and synoptic surface observations compiled globally from ~ 35, 500 weather stations ; it is updated automatically, hourly. the data largely date back to paper records which were keyed in by hand in the '60s and '70s ( and in some cases, weather observations from over one hundred years ago ). it was developed by the joint federal climate complex project in asheville, north carolina.
A bird such as a penguin can survive in arctic weather due to
[ "sunlight", "bears", "weather", "feathers" ]
Key fact: thick feathers can be used for keeping warm
D
3
openbookqa
over the last two centuries many environmental chemical observations have been made from a variety of ground - based, airborne, and orbital platforms and deposited in databases. many of these databases are publicly available. all of the instruments mentioned in this article give online public access to their data. these observations are critical in developing our understanding of the earth's atmosphere and issues such as climate change, ozone depletion and air quality. some of the external links provide repositories of many of these datasets in one place. for example, the cambridge atmospheric chemical database is a large database in a uniform ascii format. each observation is augmented with the meteorological conditions such as the temperature, potential temperature, geopotential height, and equivalent pv latitude. ground - based and balloon observations ndsc observations. the network for the detection of stratospheric change ( ndsc ) is a set of high - quality remote - sounding research stations for observing and understanding the physical and chemical state of the stratosphere. ozone and key ozone - related chemical compounds and parameters are targeted for measurement. the ndsc is a major component of the international upper atmosphere research effort and has been endorsed by national and international scientific agencies, including the international ozone commission, the united nations environment programme ( unep ), and the world meteorological organization ( wmo ). the primary instruments and measurements are : ozone lidar ( vertical profiles of ozone from the tropopause to at least 40 km altitude
then the bulk of effort concentrates on writing the proper mediator code that will transform predicates on weather into a query over the weather website. this effort can become complex if some other source also relates to weather, because the designer may need to write code to properly combine the results from the two sources. on the other hand, in lav, the source database is modeled as a set of views over the global schema g.
What are predators?
[ "herbivores", "plant eaters", "meat devourers", "peaceful" ]
Key fact: carnivores are predators
C
2
openbookqa
dr. duke's phytochemical and ethnobotanical databases is an online database developed by james a. duke at the usda. the databases report species, phytochemicals, and biological activity, as well as ethnobotanical uses. the current phytochemical and ethnobotanical databases facilitate plant, chemical, bioactivity, and ethnobotany searches. a large number of plants and their chemical profiles are covered, and data are structured to support browsing and searching in several user - focused ways. for example, users can get a list of chemicals and activities for a specific plant of interest, using either its scientific or common name download a list of chemicals and their known activities in pdf or spreadsheet form find plants with chemicals known for a specific biological activity display a list of chemicals with their ld toxicity data find plants with potential cancer - preventing activity display a list of plants for a given ethnobotanical use find out which plants have the highest levels of a specific chemical references to the supporting scientific publications are provided for each specific result. also included are links to nutritional databases, plants and cancer treatments and other plant - related databases. the content of the database is licensed under the creative commons cc0 public domain. external links dr. duke's phytochemical and ethnobotanical databases references ( dataset ) u. s. department of agriculture, agricultural research service. 1992 - 2016
the vertebrate genome annotation ( vega ) database is a biological database dedicated to assisting researchers in locating specific areas of the genome and annotating genes or regions of vertebrate genomes. the vega browser is based on ensembl web code and infrastructure and provides a public curation of known vertebrate genes for the scientific community. the vega website is updated frequently to maintain the most current information about vertebrate genomes and attempts to present consistently high - quality annotation of all its published vertebrate genomes or genome regions. vega was developed by the wellcome trust sanger institute and is in close association with other annotation databases, such as zfin ( the zebrafish information network ), the havana group and genbank. manual annotation is currently more accurate at identifying splice variants, pseudogenes, polyadenylation features, non - coding regions and complex gene arrangements than automated methods. history the vertebrate genome annotation ( vega ) database was first made public in 2004 by the wellcome trust sanger institute. it was designed to view manual annotations of human, mouse and zebrafish genomic sequences, and it is the central cache for genome sequencing centers to deposit their annotation of human chromosomes. manual annotation of genomic data is extremely valuable to produce an accurate reference gene set but is expensive compared with automatic methods and so has been limited to model organisms. annotation tools
the bitterdb is a database of compounds that were reported to taste bitter to humans. the aim of the bitterdb database is to gather information about bitter - tasting natural and synthetic compounds, and their cognate bitter taste receptors ( t2rs or tas2rs ). summary the bitterdb includes over 670 compounds that were reported to taste bitter to humans. the compounds can be searched by name, chemical structure, similarity to other bitter compounds, association with a particular human bitter taste receptor, and by other properties as well. the database also contains information on mutations in bitter taste receptors that were shown to influence receptor activation by bitter compounds. database overview bitter compounds bitterdb currently contains more than 670 compounds that were cited in the literature as bitter. for each compound, the database offers information regarding its molecular properties, references for the compound's bitterness, including additional information about the bitterness category of the compound ( e. g. a bitter - sweet or slightly bitter annotation ), different compound identifiers ( smiles, cas registry number, iupac systematic name ), an indication whether the compound is derived from a natural source or is synthetic, a link to the compound's pubchem entry and different file formats for downloading ( sdf, image, smiles ). over 200 bitter compounds have been experimentally linked to their corresponding human bitter taste receptors. for those compounds, bitterdb provides additional information, including links to the publications indicating these ligand - receptor interactions, the effective concentration for receptor
A chipmunk will consume all of these things except just one, which it refuses:
[ "jerky", "grapes", "nuts", "acorn" ]
Key fact: a chipmunk eats acorns
A
0
openbookqa
an object database or object - oriented database is a database management system in which information is represented in the form of objects as used in object - oriented programming. object databases are different from relational databases which are table - oriented. a third type, objectrelational databases, is a hybrid of both approaches. object databases have been considered since the early 1980s. overview object - oriented database management systems ( oodbmss ) also called odbms ( object database management system ) combine database capabilities with object - oriented programming language capabilities. oodbmss allow object - oriented programmers to develop the product, store them as objects, and replicate or modify existing objects to make new objects within the oodbms. because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the oodbms and the programming language will use the same model of representation. relational dbms projects, by way of contrast, maintain a clearer division between the database model and the application. as the usage of web - based technology increases with the implementation of intranets and extranets, companies have a vested interest in oodbmss to display their complex data. using a dbms that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer - aided design ( cad ). some object - oriented databases are designed to work well with object - oriented programming languages such as delphi, ruby, python
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb includes also a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without missing the focus on the searched protein or gene. in phylomedb v4 all the information available for each tree
A circuit is parallel when more than one pathway has flowing what?
[ "ideas", "stocks", "water", "zapping energy" ]
Key fact: if electricity flows along more than one pathway then the circuit is parallel
D
3
openbookqa
a chemical database is a database specifically designed to store chemical information. this information is about chemical and crystal structures, spectra, reactions and syntheses, and thermophysical data. types of chemical databases bioactivity database bioactivity databases correlate structures or other chemical information to bioactivity results taken from bioassays in literature, patents, and screening programs. chemical structures chemical structures are traditionally represented using lines indicating chemical bonds between atoms and drawn on paper ( 2d structural formulae ). while these are ideal visual representations for the chemist, they are unsuitable for computational use and especially for search and storage. small molecules ( also called ligands in drug design applications ), are usually represented using lists of atoms and their connections. large molecules such as proteins are however more compactly represented using the sequences of their amino acid building blocks. radioactive isotopes are also represented, which is an important attribute for some applications. large chemical databases for structures are expected to handle the storage and searching of information on millions of molecules taking terabytes of physical memory. literature database chemical literature databases correlate structures or other chemical information to relevant references such as academic papers or patents. this type of database includes stn, scifinder, and reaxys. links to literature are also included in many databases that focus on chemical characterization. crystallographic database crystallographic databases store x - ray crystal structure data. common examples include protein data bank and cambridge structural database. nmr spectra database nmr
until the 1980s, databases were viewed as computer systems that stored record - oriented and business data such as manufacturing inventories, bank records, and sales transactions. a database system was not expected to merge numeric data with text, images, or multimedia information, nor was it expected to automatically notice patterns in the data it stored. in the late 1980s the concept of an intelligent database was put forward as a system that manages information ( rather than data ) in a way that appears natural to users and which goes beyond simple record keeping. the term was introduced in 1989 by the book intelligent databases by kamran parsaye, mark chignell, setrag khoshafian and harry wong. the concept postulated three levels of intelligence for such systems : high level tools, the user interface and the database engine. the high level tools manage data quality and automatically discover relevant patterns in the data with a process called data mining. this layer often relies on the use of artificial intelligence techniques. the user interface uses hypermedia in a form that uniformly manages text, images and numeric data. the intelligent database engine supports the other two layers, often merging relational database techniques with object orientation. in the twenty - first century, intelligent databases have now become widespread, e. g. hospital databases can now call up patient histories consisting of charts, text and x - ray images just with a few mouse clicks, and many corporate databases include decision support tools based on sales pattern analysis. external links intelligent databases, book
During a thunderstorm an animal is most likely going to relocate to a
[ "cave", "beach", "mountain", "pond" ]
Key fact: shelter is used for protection by animals against weather
A
0
openbookqa
During the time the moon orbits the Earth 13 times, the Earth orbits the sun
[ "once", "twice", "three times", "four times" ]
Key fact: the moon orbits the Earth approximately 13 times per year
A
0
openbookqa
a triplestore or rdf store is a purpose - built database for the storage and retrieval of triples through semantic queries. a triple is a data entity composed of subject - predicate - object, like " bob is 35 " ( i. e., bob's age measured in years is 35 ) or " bob knows fred ". much like a relational database, information in a triplestore is stored and retrieved via a query language. unlike a relational database, a triplestore is optimized for the storage and retrieval of triples. in addition to queries, triples can usually be imported and exported using the resource description framework ( rdf ) and other formats. implementations some triplestores have been built as database engines from scratch, while others have been built on top of existing commercial relational database engines ( such as sql - based ) or nosql document - oriented database engines. like the early development of online analytical processing ( olap ) databases, this intermediate approach allowed large and powerful database engines to be constructed for little programming effort in the initial phases of triplestore development. a difficulty with implementing triplestores over sql is that although " triples " may thus be " stored ", implementing efficient querying of a graph - based rdf model ( such as mapping from sparql ) onto sql queries is difficult. related database types adding a name to the triple makes a " quad store " or named graph. a graph database has a
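The subject - predicate - object model above can be sketched with a simple in-memory store where None acts as a query wildcard. This is a toy illustration, not a real RDF engine (real triplestores keep several index orderings for efficient pattern matching):

```python
# Toy triplestore: holds (subject, predicate, object) triples and
# answers pattern queries; None in a position matches anything.
class TripleStore:
    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

store = TripleStore()
store.add("bob", "age", 35)        # " bob is 35 "
store.add("bob", "knows", "fred")  # " bob knows fred "
print(store.query(s="bob", p="knows"))  # [('bob', 'knows', 'fred')]
```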
structured query language ( sql ) ( pronounced s - q - l ; or alternatively as " sequel " ) is a domain - specific language used to manage data, especially in a relational database management system ( rdbms ). it is particularly useful in handling structured data, i. e., data incorporating relations among entities and variables. introduced in the 1970s, sql offered two main advantages over older readwrite apis such as isam or vsam. firstly, it introduced the concept of accessing many records with one single command. secondly, it eliminates the need to specify how to reach a record, i. e., with or without an index. originally based upon relational algebra and tuple relational calculus, sql consists of many types of statements, which may be informally classed as sublanguages, commonly : data query language ( dql ), data definition language ( ddl ), data control language ( dcl ), and data manipulation language ( dml ). the scope of sql includes data query, data manipulation ( insert, update, and delete ), data definition ( schema creation and modification ), and data access control. although sql is essentially a declarative language ( 4gl ), it also includes procedural elements. sql was one of the first commercial languages to use edgar f. codd's relational model. the model was described in his influential 1970 paper, " a relational model of data for large shared data banks ". despite not entirely ad
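The two advantages the paragraph names (accessing many records with one command, and not specifying how a record is reached) can be demonstrated with Python's built-in sqlite3 module; the table and its rows are invented for the example:

```python
import sqlite3

# In-memory database; schema and data are made up for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("north", 10), ("south", 25), ("north", 5)])

# One declarative statement touches many records at once, and no
# access path ( index or scan ) is spelled out by the query author.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('north', 15), ('south', 25)]
```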
If a tree stands firm during a windy day, which of these could be holding it in place?
[ "the leaves on the branches", "the sap in the tree", "the branches on the stem", "the roots in the ground" ]
Key fact: roots anchor plants into the soil
D
3
openbookqa
in computer science, a tree is a widely used abstract data type that represents a hierarchical tree structure with a set of connected nodes. each node in the tree can be connected to many children ( depending on the type of tree ), but must be connected to exactly one parent, except for the root node, which has no parent ( i. e., the root node as the top - most node in the tree hierarchy ). these constraints mean there are no cycles or " loops " ( no node can be its own ancestor ), and also that each child can be treated like the root node of its own subtree, making recursion a useful technique for tree traversal. in contrast to linear data structures, many trees cannot be represented by relationships between neighboring nodes ( parent and children nodes of a node under consideration, if they exist ) in a single straight line ( called edge or link between two adjacent nodes ). binary trees are a commonly used type, which constrain the number of children for each parent to at most two. when the order of the children is specified, this data structure corresponds to an ordered tree in graph theory. a value or pointer to other data may be associated with every node in the tree, or sometimes only with the leaf nodes, which have no children nodes. the abstract data type ( adt ) can be represented in a number of ways, including a list of parents with pointers to children, a list of children with pointers to parents, or
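The constraints described above (each node has exactly one parent except the root, so each child heads its own subtree) make recursion natural for traversal. A minimal sketch, not tied to any library:

```python
# Minimal rooted tree: children lists encode the parent/child links,
# and traversal recurses into each child subtree, as the text describes.
class TreeNode:
    def __init__(self, value, children=None):
        self.value = value
        self.children = children or []

def preorder(node):
    # Visit the node itself, then recurse into each child subtree.
    yield node.value
    for child in node.children:
        yield from preorder(child)

root = TreeNode("root", [
    TreeNode("left", [TreeNode("leaf")]),  # "leaf" has no children
    TreeNode("right"),
])
print(list(preorder(root)))  # ['root', 'left', 'leaf', 'right']
```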
a tree structure, tree diagram, or tree model is a way of representing the hierarchical nature of a structure in a graphical form. it is named a " tree structure " because the classic representation resembles a tree, although the chart is generally upside down compared to a biological tree, with the " stem " at the top and the " leaves " at the bottom. a tree structure is conceptual, and appears in several forms. for a discussion of tree structures in specific fields, see tree ( data structure ) for computer science ; insofar as it relates to graph theory, see tree ( graph theory ) or tree ( set theory ). other related articles are listed below. terminology and properties the tree elements are called " nodes ". the lines connecting elements are called " branches ". nodes without children are called leaf nodes, " end - nodes ", or " leaves ". every finite tree structure has a member that has no superior. this member is called the " root " or root node. the root is the starting node. but the converse is not true : infinite tree structures may or may not have a root node. the names of relationships between nodes model the kinship terminology of family relations. the gender - neutral names " parent " and " child " have largely displaced the older " father " and " son " terminology. the term " uncle " is still widely used for other nodes at the same level as the parent, although it is sometimes replaced with gender - neutral terms like " ommer "
the tree elements are called " nodes ". the lines connecting elements are called " branches ". nodes without children are called leaf nodes, " end - nodes ", or " leaves ".
A person wants to dry pips from sunflowers and then can plant those pips knowing that they have enough
[ "financing", "nutriment", "grain", "solar wind" ]
Key fact: a seed is used for storing food for a new plant
B
1
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
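The idea of relationships as first-class, labelled, directed records with properties can be sketched without any database engine; the node names and relationship labels below are invented for illustration:

```python
# Minimal property-graph sketch: nodes with properties, plus labelled,
# directed edges that carry their own properties (first-class records).
nodes = {
    "alice": {"kind": "person"},
    "bob": {"kind": "person"},
    "acme": {"kind": "company"},
}

# Each edge is (source, label, target, properties).
edges = [
    ("alice", "KNOWS", "bob", {"since": 2019}),
    ("alice", "WORKS_AT", "acme", {"role": "engineer"}),
]

def neighbours(node: str, label: str) -> list:
    # Traversal reads the stored relationships directly instead of
    # computing a join at query time, which is why graph lookups over
    # heavily inter-connected data can be retrieved in one operation.
    return [dst for src, lbl, dst, _ in edges if src == node and lbl == label]
```

A real graph database would index adjacency per node rather than scanning a global edge list, but the data model is the same.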
a cost database is a computerized database of cost estimating information, which is normally used with construction estimating software to support the formation of cost estimates. a cost database may also simply be an electronic reference of cost data. overview a cost database includes the electronic equivalent of a cost book, or cost reference book, a tool used by estimators for many years. cost books may be internal records at a particular company or agency, or they may be commercially published books on the open market. aec teams and federal agencies can and often do collect internally sourced data from their own specialists, vendors, and partners. this is valuable personalized cost data that is captured but often doesn't cover the same range that commercial cost book data can. internally sourced data is difficult to maintain and does not have the same level of developed user interface or functionality as a commercial product. the cost database may be stored in a relational database management system, which may be in either an open or proprietary format, serving the data to the cost estimating software. the cost database may be hosted in the cloud. estimators use a cost database to store data in a structured way which is easy to manage and retrieve. details costing data the most basic element of a cost estimate and therefore the cost database is the estimate line item or work item. an example is " concrete, 4000 psi ( 30 mpa ), " which is the description of the item. in the cost database, an item is a row or record in
solarsoft is a collaborative software development system created at lockheed - martin to support solar data analysis and spacecraft operation activities. it is widely recognized in the solar physics community as having revolutionized solar data analysis starting in the early 1990s. solarsoft is in active development and use by research groups on all seven continents. solarsoft is a store - and - forward system that makes use of rsync, csh and other unix tools to distribute the software to a wide variety of platforms. solarsoft predates cvs and most other collaborative development systems ; hence, it does not provide direct support for many features that today would be considered necessary, such as software versioning. the use of solarsoft has grown to include calibration data and even complete catalog indices for some instruments, as well as the scientific software. most of the software in the solarsoft tree pertains to either solar data analysis or specific space missions or observatories such as yohkoh or soho. the vast majority is written in idl, the most commonly used analysis platform in the solar physics community, though some c, ana, and pdl modules are also available. external links solarsoft @ lmsal solarsoft @ nasa
A mother births what?
[ "light", "elements", "younglings", "nothing" ]
Key fact: a mother births offspring
C
2
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
the chemical database service is an epsrc - funded mid - range facility that provides uk academic institutions with access to a number of chemical databases. it has been hosted by the royal society of chemistry since 2013, before which it was hosted by daresbury laboratory ( part of the science and technology facilities council ). currently, the included databases are : acd / i - lab, a tool for prediction of physicochemical properties and nmr spectra from a chemical structure available chemicals directory, a structure - searchable database of commercially available chemicals cambridge structural database ( csd ), a crystallographic database of organic and organometallic structures inorganic crystal structure database ( icsd ), a crystallographic database of inorganic structures crystalworks, a database combining data from csd, icsd and crystmet detherm, a database of thermophysical data for chemical compounds and mixtures spresiweb, a database of organic compounds and reactions
a crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. they are characterized by symmetry, morphology, and directionally dependent physical properties. a crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. ( molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in x - ray, neutron, and electron diffraction based crystallography ). crystal structures of crystalline material are typically determined from x - ray or neutron single - crystal diffraction data and stored in crystal structure databases. they are routinely identified by comparing reflection intensities and lattice spacings from x - ray powder diffraction data with entries in powder - diffraction fingerprinting databases. crystal structures of nanometer sized crystalline samples can be determined via structure factor amplitude information from single - crystal electron diffraction data or structure factor amplitude and phase angle information from fourier transforms of hrtem images of crystallites. they are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice - fringe fingerprint plots with entries in a lattice - fringe fingerprinting database. crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. many provide structure visualization capabilities. they can be browser based or
Photosynthesis features
[ "single celled organisms", "humans", "cats", "fish" ]
Key fact: a leaf performs photosynthesis
A
0
openbookqa
biological organization exists at all levels in organisms. it can be seen at the smallest level, in the molecules that make up such compounds as dna and proteins, to the largest level, in an organism such as a blue whale, the largest mammal on earth. similarly, single celled prokaryotes and eukaryotes show order in the way their cells are arranged. single - celled organisms such as an amoeba are free - floating and independent - living. their single - celled " bodies " are able to carry out all the processes of life such as metabolism and respiration without help from other cells.
a taxonomic database is a database created to hold information on biological taxa for example groups of organisms organized by species name or other taxonomic identifier for efficient data management and information retrieval. taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online ; to underpin the operation of web - based species information systems ; as a part of biological collection management ( for example in museums and herbaria ) ; as well as providing, in some cases, the taxon management component of broader science or biology information systems. they are also a fundamental contribution to the discipline of biodiversity informatics. goals taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example : beetles in a defined region, mammals globally, or all described taxa in the tree of life. a taxonomic database may incorporate organism identifiers ( scientific name, author, and for zoological taxa year of original publication ), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon ( such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc. ). some databases, such as the global biodiversity information facility ( gbif ) database and the barcode of life data system, store the dna barcode of a taxon if one exists ( also called the barcode index number ( bin ) which may
the animal genome size database is a catalogue of published genome size estimates for vertebrate and invertebrate animals. it was created in 2001 by dr. t. ryan gregory of the university of guelph in canada. as of september 2005, the database contains data for over 4, 000 species of animals. a similar database, the plant dna c - values database ( c - value being analogous to genome size in diploid organisms ) was created by researchers at the royal botanic gardens, kew, in 1997. see also list of organisms by chromosome count references external links animal genome size database plant dna c - values database fungal genome size database cell size database
Which likely occurs in the digestive system?
[ "twinkies are converted to usable material", "air comes in and out", "twinkies are baked fresh", "plastic is found here" ]
Key fact: the breaking down of food into simple substances occurs in the digestive system
A
0
openbookqa
the cookie turns out to be a plain one. how probable is it that fred picked it out of bowl # 1? intuitively, it seems clear that the answer should be more than a half, since there are more plain cookies in bowl # 1.
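The intuition that the answer exceeds one half can be checked with Bayes' theorem. Assuming the classic two-bowl setup this excerpt appears to come from (hypothetical counts: bowl #1 holds 30 plain and 10 chocolate cookies, bowl #2 holds 20 of each, and either bowl is equally likely to be picked):

```python
# Hypothetical counts for the classic two-bowl cookie example.
p_bowl1 = 0.5                # prior: each bowl equally likely
p_plain_given_b1 = 30 / 40   # likelihood of drawing plain from bowl #1
p_plain_given_b2 = 20 / 40   # likelihood of drawing plain from bowl #2

# Total probability of a plain cookie across both bowls.
p_plain = p_plain_given_b1 * p_bowl1 + p_plain_given_b2 * (1 - p_bowl1)

# Bayes' theorem: P(bowl1 | plain) = P(plain | bowl1) * P(bowl1) / P(plain)
posterior = p_plain_given_b1 * p_bowl1 / p_plain
```

Under these assumed counts the posterior works out to 0.6, agreeing with the intuition that it should be more than a half because bowl #1 has more plain cookies.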
what is this strange - looking object? can you guess what it is? it ’ s a model of a certain type of matter. some types of matter are elements, or pure substances that cannot be broken down into simpler substances. many other types of matter are compounds. the model above represents a compound. the compound it represents is carbon dioxide, a gas you exhale each time you breathe.
starches are complex carbohydrates that are polymers of glucose. starches are used by plants to store energy. consumers get starches by eating plants. they break down the starches to sugar for energy.
Bananas, pineapples, and coconuts are from places that are
[ "warm", "islands", "wet", "far away" ]
Key fact: warm-weather organisms live in warm climates
A
0
openbookqa
integrated surface database ( isd ) is a global database compiled by the national oceanic and atmospheric administration ( noaa ) and the national centers for environmental information ( ncei ) comprising hourly and synoptic surface observations compiled globally from ~ 35, 500 weather stations ; it is updated automatically every hour. the data largely date back to paper records which were keyed in by hand from the '60s and '70s ( and in some cases, weather observations from over one hundred years ago ). it was developed by the joint federal climate complex project in asheville, north carolina.
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
then the bulk of effort concentrates on writing the proper mediator code that will transform predicates on weather into a query over the weather website. this effort can become complex if some other source also relates to weather, because the designer may need to write code to properly combine the results from the two sources. on the other hand, in lav, the source database is modeled as a set of views over the global schema g.
This type of energy resource often results in particulates that are very toxic to breathe:
[ "wood", "coal", "oil", "solar" ]
Key fact: coal is used to produce electricity by burning in coal-fire power stations
B
1
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
a crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. they are characterized by symmetry, morphology, and directionally dependent physical properties. a crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. ( molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in x - ray, neutron, and electron diffraction based crystallography ). crystal structures of crystalline material are typically determined from x - ray or neutron single - crystal diffraction data and stored in crystal structure databases. they are routinely identified by comparing reflection intensities and lattice spacings from x - ray powder diffraction data with entries in powder - diffraction fingerprinting databases. crystal structures of nanometer sized crystalline samples can be determined via structure factor amplitude information from single - crystal electron diffraction data or structure factor amplitude and phase angle information from fourier transforms of hrtem images of crystallites. they are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice - fringe fingerprint plots with entries in a lattice - fringe fingerprinting database. crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. many provide structure visualization capabilities. they can be browser based or
the chemical database service is an epsrc - funded mid - range facility that provides uk academic institutions with access to a number of chemical databases. it has been hosted by the royal society of chemistry since 2013, before which it was hosted by daresbury laboratory ( part of the science and technology facilities council ). currently, the included databases are : acd / i - lab, a tool for prediction of physicochemical properties and nmr spectra from a chemical structure available chemicals directory, a structure - searchable database of commercially available chemicals cambridge structural database ( csd ), a crystallographic database of organic and organometallic structures inorganic crystal structure database ( icsd ), a crystallographic database of inorganic structures crystalworks, a database combining data from csd, icsd and crystmet detherm, a database of thermophysical data for chemical compounds and mixtures spresiweb, a database of organic compounds and reactions
A human who goes long periods of time without nourishment will experience
[ "Starvation", "Perspiration", "Fullness", "happiness" ]
Key fact: lack of food causes starvation
A
0
openbookqa
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ), instead of oltp ( online transaction processing ) system. modern decision, and classical statistical databases are often closer to the relational model than the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
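The two options for handling sparseness (keep the nulls and compress them away, or store only the non-null entries) can be illustrated with a tiny parameter/measurement table; the experiment names and values below are invented:

```python
# A dense (parameter, variable) -> measurement table with many nulls.
# Values are hypothetical.
dense = {
    ("exp1", "temperature"): 21.5,
    ("exp1", "pressure"): None,
    ("exp2", "temperature"): None,
    ("exp2", "pressure"): 1.2,
}

# Option 2 from the text: remove the entries that only hold null values,
# keeping a sparse mapping of actual measurements.
sparse = {key: val for key, val in dense.items() if val is not None}

# Fraction of cells that were null — 40-50% is typical per the text.
sparsity = 1 - len(sparse) / len(dense)
```

Option 1 (retaining the nulls and relying on compression) preserves uniform addressing of every (parameter, variable) cell, at the cost of storing placeholders.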
the ki database ( or ki db ) is a public domain database of published binding affinities ( ki ) of drugs and chemical compounds for receptors, neurotransmitter transporters, ion channels, and enzymes. the resource is maintained by the university of north carolina at chapel hill and is funded by the nimh psychoactive drug screening program and by a gift from the heffter research institute. as of april 2010, the database had data for 7 449 compounds at 738 different receptors and, as of 27 april 2018, 67 696 ki values. the ki database has data useful for both chemical biology and chemogenetics. external links description search form bindingdb. org - a similar publicly available database
mental health informatics is a branch of health or clinical informatics focused on the use of information technology ( it ) and information to improve mental health. like health informatics, mental health informatics is a multidisciplinary field that promotes care delivery, research and education as well as the technology and methodologies required to implement it. metrics and coding terminology and coding systems such as the ( dsm ) specific mental health assessment and diagnostic systems data collection and storage systems systematic collection of information is fundamental to successful practices. collecting data useful for mental illness diagnosis and treatment is challenging, as we lack quantitative biomarkers that might be used in standard health informatics, such as body temperature or blood pressure. largely, current diagnosis and treatment is driven by clinical interviews between professionals and patients. interviews are not only difficult to draw standardized data from because of diverse individual experience, condition, and accuracy of a patient's memory. rapid advancements in computation and storage systems have the potential to transform this data collection process. for example, a 2014 study in ireland explored the use of a smartphone application to record daily mood and thoughts. such a collection process would provide plentiful standardized data less afflicted by patient recollection issues. integration of mental health function into electronic health record systems ( ehrs ) and larger organisational systems mobile and digital sensors the ubiquity of smartphones and other mobile computing platforms is beginning to enable new types of data collection. recent work has pioneered the use of
If a thing is going to be a planet, it must orbit in a certain amount of time, which can exclude
[ "mercury", "the ninth planet", "the third planet", "venus" ]
Key fact: pluto has not cleared its orbit
B
1
openbookqa
a graph database ( gdb ) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. a key concept of the system is the graph ( or edge or relationship ). the graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. the relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. graph databases hold the relationships between data as a priority. querying relationships is fast because they are perpetually stored in the database. relationships can be intuitively visualized using graph databases, making them useful for heavily inter - connected data. graph databases are commonly referred to as a nosql database. graph databases are similar to 1970s network model databases in that both represent general graphs, but network - model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges. the underlying storage mechanism of graph databases can vary. relationships are first - class citizens in a graph database and can be labelled, directed, and given properties. some depend on a relational engine and store the graph data in a table ( although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices ). others use a keyvalue store or document - oriented database for storage, making them inherently nosql structures. as of 2021, no graph query
nevertheless, the parameters of the second planet are still highly uncertain. on the other hand, the catalog of nearby exoplanets gives a period of 2, 190 days, which would put the planets close to a 2 : 1 ratio of orbital periods, though the reference for these parameters is uncertain : the original fischer et al. paper is cited as a reference in spite of the fact that it gives different parameters, though this solution has been adopted by the extrasolar planets encyclopaedia. in 2010, the discovery of a third planet ( 47 uma d ) was made by using the bayesian kepler periodogram. using this model of this planetary system it was determined that it is 100, 000 times more likely to have three planets than two planets.
astroinformatics is an interdisciplinary field of study involving the combination of astronomy, data science, machine learning, informatics, and information / communications technologies. the field is closely related to astrostatistics. data - driven astronomy ( dda ) refers to the use of data science in astronomy. several outputs of telescopic observations and sky surveys are taken into consideration and approaches related to data mining and big data management are used to analyze, filter, and normalize the data set that are further used for making classifications, predictions, and anomaly detections by advanced statistical approaches, digital image processing and machine learning. the output of these processes is used by astronomers and space scientists to study and identify patterns, anomalies, and movements in outer space and conclude theories and discoveries in the cosmos. background astroinformatics is primarily focused on developing the tools, methods, and applications of computational science, data science, machine learning, and statistics for research and education in data - oriented astronomy. early efforts in this direction included data discovery, metadata standards development, data modeling, astronomical data dictionary development, data access, information retrieval, data integration, and data mining in the astronomical virtual observatory initiatives. further development of the field, along with astronomy community endorsement, was presented to the national research council ( united states ) in 2009 in the astroinformatics " state of the profession " position paper for the 2010 astronomy and astrophysics decadal survey. that position paper provided the basis for the subsequent more detailed
Animals are just like humans in that if they run out of oxygen, breathing is impossible and
[ "They will perish", "they will type.", "they will program", "they will Laugh" ]
Key fact: an animal requires oxygen to breathe
A
0
openbookqa
an approach to the analysis of humor is classification of jokes. a further step is an attempt to generate jokes based on the rules that underlie classification. simple prototypes for computer pun generation were reported in the early 1990s, based on a natural language generator program, vinci. graeme ritchie and kim binsted in their 1994 research paper described a computer program, jape, designed to generate question - answer - type puns from a general, i. e., non - humorous, lexicon.
a conversational user interface ( cui ) is a user interface for computers that emulates a conversation with a real human. historically, computers have relied on text - based user interfaces and graphical user interfaces ( guis ) ( such as the user pressing a " back " button ) to translate the user's desired action into commands the computer understands. while an effective mechanism of completing computing actions, there is a learning curve for the user associated with gui. instead, cuis provide opportunity for the user to communicate with the computer in their natural language rather than in a syntax specific commands. to do this, conversational interfaces use natural language processing ( nlp ) to allow computers to understand, analyze, and create meaning from human language. unlike word processors, nlp considers the structure of human language ( i. e., words make phrases ; phrases make sentences which convey the idea or intent the user is trying to invoke ). the ambiguous nature of human language makes it difficult for a machine to always correctly interpret the user's requests, which is why we have seen a shift toward natural - language understanding ( nlu ). nlu allows for sentiment analysis and conversational searches which allows a line of questioning to continue, with the context carried throughout the conversation. nlu allows conversational interfaces to handle unstructured inputs that the human brain is able to understand such as spelling mistakes of follow - up questions. for example, through leveraging nlu, a user could first ask for
a program written in assembly language consists of a series of mnemonic processor instructions and meta - statements ( known variously as declarative operations, directives, pseudo - instructions, pseudo - operations and pseudo - ops ), comments and data. assembly language instructions usually consist of an opcode mnemonic followed by an operand, which might be a list of data, arguments or parameters. some instructions may be " implied, " which means the data upon which the instruction operates is implicitly defined by the instruction itself — such an instruction does not take an operand. the resulting statement is translated by an assembler into machine language instructions that can be loaded into memory and executed.
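the translation step described above can be illustrated with a toy assembler. the mnemonics, opcodes and implied - instruction set below are invented for a hypothetical 8 - bit machine, not any real instruction set :

```python
# Toy instruction table for a hypothetical 8-bit machine (invented opcodes).
OPCODES = {"LDA": 0x01, "ADD": 0x02, "STA": 0x03, "NOP": 0x00, "HLT": 0xFF}
IMPLIED = {"NOP", "HLT"}  # "implied" instructions take no operand

def assemble(source):
    """Translate mnemonic lines like 'LDA 10' into machine-code bytes."""
    code = []
    for line in source.splitlines():
        line = line.split(";")[0].strip()  # drop comments and whitespace
        if not line:
            continue  # skip blank lines
        parts = line.split()
        mnemonic = parts[0].upper()
        code.append(OPCODES[mnemonic])  # opcode byte
        if mnemonic not in IMPLIED:
            # Operand byte; int(..., 0) accepts decimal or 0x-prefixed hex.
            code.append(int(parts[1], 0))
    return bytes(code)
```

a real assembler also resolves labels, evaluates directives and emits relocation information, but the mnemonic - to - opcode translation at its core looks like this.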
a student notices a large number of hawks in the playground. what will likely happen to the lizards?
[ "they will flourish and thrive", "their kind will dwindle", "they will become predators", "all of these" ]
Key fact: hawks eat lizards
B
1
openbookqa
phylomedb is a public biological database for complete catalogs of gene phylogenies ( phylomes ). it allows users to interactively explore the evolutionary history of genes through the visualization of phylogenetic trees and multiple sequence alignments. moreover, phylomedb provides genome - wide orthology and paralogy predictions which are based on the analysis of the phylogenetic trees. the automated pipeline used to reconstruct trees aims at providing a high - quality phylogenetic analysis of different genomes, including maximum likelihood tree inference, alignment trimming and evolutionary model testing. phylomedb also includes a public download section with the complete set of trees, alignments and orthology predictions, as well as a web api that facilitates cross linking trees from external sources. finally, phylomedb provides an advanced tree visualization interface based on the ete toolkit, which integrates tree topologies, taxonomic information, domain mapping and alignment visualization in a single and interactive tree image. new steps on phylomedb the tree searching engine of phylomedb was updated to provide a gene - centric view of all phylomedb resources. thus, after a protein or gene search, all the available trees in phylomedb are listed and organized by phylome and tree type. users can switch among all available seed and collateral trees without missing the focus on the searched protein or gene. in phylomedb v4 all the information available for each tree
a statistical database is a database used for statistical analysis purposes. it is an olap ( online analytical processing ) system rather than an oltp ( online transaction processing ) system. modern decision - support databases and classical statistical databases are often closer to the relational model than to the multidimensional model commonly used in olap systems today. statistical databases typically contain parameter data and the measured data for these parameters. for example, parameter data consists of the different values for varying conditions in an experiment ( e. g., temperature, time ). the measured data ( or variables ) are the measurements taken in the experiment under these varying conditions. many statistical databases are sparse, with many null or zero values. it is not uncommon for a statistical database to be 40 % to 50 % sparse. there are two options for dealing with the sparseness : ( 1 ) leave the null values in there and use compression techniques to squeeze them out or ( 2 ) remove the entries that only have null values. statistical databases often incorporate support for advanced statistical analysis techniques, such as correlations, which go beyond sql. they also pose unique security concerns, which were the focus of much research, particularly in the late 1970s and early to mid - 1980s. privacy in statistical databases in a statistical database, it is often desired to allow query access only to aggregate data, not individual records. securing such a database is a difficult problem, since intelligent users can use a combination of aggregate queries to derive information about a single individual. some common approaches are :
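option ( 2 ) for the sparseness discussed above, removing the entries that only hold null values, can be sketched in python by keying each stored measurement on its parameter values and simply never storing nulls ; the class and parameter names ( temperature, time ) are illustrative :

```python
# Sketch of sparse storage: keep only non-null measurements, keyed by the
# parameter values (temperature, time) of each experimental condition.
class SparseMeasurements:
    def __init__(self):
        self._cells = {}  # (temperature, time) -> measured value

    def set(self, temperature, time, value):
        if value is not None:
            self._cells[(temperature, time)] = value  # nulls are never stored

    def get(self, temperature, time):
        return self._cells.get((temperature, time))  # a missing cell reads as null

    def sparsity(self, n_temperatures, n_times):
        """Fraction of the full parameter grid that holds no measurement."""
        total = n_temperatures * n_times
        return 1 - len(self._cells) / total
```

a 40 % to 50 % sparse database stored this way pays nothing for its empty cells, at the cost of a dictionary lookup per read.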
in bioinformatics, a gene disease database is a systematized collection of data, typically structured to model aspects of reality, in a way to comprehend the underlying mechanisms of complex diseases, by understanding multiple composite interactions between phenotype - genotype relationships and gene - disease mechanisms. gene disease databases integrate human gene - disease associations from various expert curated databases and text mining derived associations including mendelian, complex and environmental diseases. introduction experts in different areas of biology and bioinformatics have been trying to comprehend the molecular mechanisms of diseases to design preventive and therapeutic strategies for a long time. for some illnesses, it has become apparent that it is not enough to obtain an index of the disease - related genes ; one must also uncover how disruptions of molecular networks in the cell give rise to disease phenotypes. moreover, even with the unprecedented wealth of information available, obtaining such catalogues is extremely difficult. genetic broadly speaking, genetic diseases are caused by aberrations in genes or chromosomes. many genetic diseases are present from before birth. genetic disorders account for a significant number of the health care problems in our society. advances in the understanding of these diseases have increased both the life span and quality of life for many of those affected by genetic disorders. recent developments in bioinformatics and laboratory genetics have made possible the better delineation of certain malformation and mental retardation syndromes, so that their mode of inheritance
When you increase the viability of food, you increase the ability to
[ "waste it", "modify it", "disperse it", "displace it" ]
Key fact: as ability to preserve food increases, the ability to transport food increases
C
2
openbookqa
a database dump contains a record of the table structure and / or the data from a database and is usually in the form of a list of sql statements ( " sql dump " ). a database dump is most often used for backing up a database so that its contents can be restored in the event of data loss. corrupted databases can often be recovered by analysis of the dump. database dumps are often published by free content projects, to facilitate reuse, forking, offline use, and long - term digital preservation. dumps can be transported into environments with internet blackouts or otherwise restricted internet access, as well as facilitate local searching of the database using sophisticated tools such as grep. see also : import and export of data ; core dump ; databases ; database management system ; sqlyog, a mysql gui tool to generate database dumps ; data portability. external links : mysqldump, a database backup program ; postgresql dump backup methods, for postgresql databases.
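taking and restoring such an sql dump can be sketched with python's built - in sqlite3 module, whose standard iterdump ( ) method yields the create table and insert statements of a database ; the file paths here are whatever the caller supplies :

```python
import sqlite3

def sql_dump(db_path):
    """Return an SQL dump of an sqlite database as one string of statements."""
    conn = sqlite3.connect(db_path)
    try:
        # iterdump() yields the CREATE TABLE / INSERT statements line by line.
        return "\n".join(conn.iterdump())
    finally:
        conn.close()

def restore(dump_text, db_path):
    """Recreate a database by replaying a previously taken dump."""
    conn = sqlite3.connect(db_path)
    try:
        conn.executescript(dump_text)  # runs the whole statement list
    finally:
        conn.close()
```

because the dump is plain sql text, it can be versioned, published, searched with grep, or replayed on a machine with no network access, exactly the uses described above.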
a flat - file database is a database stored in a file called a flat file. records follow a uniform format, and there are no structures for indexing or recognizing relationships between records. the file is simple. a flat file can be a plain text file ( e. g. csv, txt or tsv ), or a binary file. relationships can be inferred from the data in the database, but the database format itself does not make those relationships explicit. the term has generally implied a small database, but very large databases can also be flat. overview plain text files usually contain one record per line. examples of flat files include / etc / passwd and / etc / group on unix - like operating systems. another example of a flat file is a name - and - address list with the fields name, address and phone number. flat files are typically either delimiter - separated or fixed - width. delimiter - separated values in delimiter - separated values files, the fields are separated by a character or string called the delimiter. common variants are comma - separated values ( csv ), where the delimiter is a comma ; tab - separated values ( tsv ), where the delimiter is the tab character ; space - separated values ; and vertical - bar - separated values ( the delimiter is | ). if the delimiter is allowed inside a field, there needs to be a way to distinguish delimiter characters or
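reading a delimiter - separated flat file can be sketched with python's standard csv module ; the sample name - and - address record below is invented, and quoting is the usual way a field is allowed to contain the delimiter itself :

```python
import csv
import io

# A minimal flat-file record list: one record per line, comma-delimited.
# Quoting ("doe, john") lets a field contain the delimiter character.
FLAT_FILE = 'name,address,phone\n"doe, john",12 main st,555-0100\n'

def read_flat_file(text, delimiter=","):
    """Parse delimiter-separated records into a list of field-name dicts."""
    reader = csv.DictReader(io.StringIO(text), delimiter=delimiter)
    return list(reader)
```

passing delimiter = " \ t " to the same function reads a tsv file ; the relationships between records, as the text notes, are nowhere in the file itself and must be inferred by the program.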
the problem of database repair is a question about relational databases which has been studied in database theory, and which is a particular kind of data cleansing. the problem asks how we can " repair " an input relational database in order to make it satisfy integrity constraints. the goal of the problem is to be able to work with data that is " dirty ", i. e., does not satisfy the right integrity constraints, by reasoning about all possible repairs of the data, i. e., all possible ways to change the data to make it satisfy the integrity constraints, without committing to a specific choice. several variations of the problem exist, depending on : what we intend to figure out about the dirty data : figuring out if some database tuple is certain ( i. e., is in every repaired database ), figuring out if some query answer is certain ( i. e., the answer is returned when evaluating the query on every repaired database ) which kinds of ways are allowed to repair the database : can we insert new facts, remove facts ( so - called subset repairs ), and so on which repaired databases do we study : those where we only change a minimal subset of the database tuples ( e. g., minimal subset repairs ), those where we only change a minimal number of database tuples ( e. g., minimal cardinality repairs ) the problem of database repair has been studied to understand what is the complexity of these different problem variants, i. e.,
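a brute - force sketch of subset repairs and certain tuples, assuming a single key constraint on one attribute ( the relation and constraint below are invented for illustration ) : a subset repair is a maximal subset of the dirty relation that satisfies the constraint, and a tuple is certain if it appears in every repair.

```python
from itertools import combinations

def subset_repairs(tuples, key_index=0):
    """All maximal subsets of `tuples` in which no two rows share a key."""
    repairs = []
    # Enumerate subsets from largest to smallest so containment checks
    # against already-found repairs filter out non-maximal candidates.
    for size in range(len(tuples), -1, -1):
        for subset in combinations(tuples, size):
            keys = [t[key_index] for t in subset]
            if len(keys) != len(set(keys)):
                continue  # violates the key constraint
            if any(set(subset) <= set(r) for r in repairs):
                continue  # contained in a larger repair, hence not maximal
            repairs.append(frozenset(subset))
    return repairs

def certain_tuples(tuples, key_index=0):
    """Tuples present in every subset repair."""
    repairs = subset_repairs(tuples, key_index)
    return set.intersection(*map(set, repairs)) if repairs else set()
```

this exhaustive enumeration is exponential and exists only to make the definitions concrete ; the complexity results mentioned in the text are precisely about when such reasoning can be done efficiently instead.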