Wednesday, 13 June 2018


A database is an organized collection of data. A relational database, more restrictively, is a collection of schemas, tables, queries, reports, views, and other elements. Database designers typically organize the data to model aspects of reality in a way that supports processes requiring information, such as (for example) modeling the availability of rooms in hotels in a way that supports finding a hotel with vacancies.

A database management system (DBMS) is a computer software application that interacts with end users, other applications, and the database itself to capture and analyze data. A general-purpose DBMS allows the definition, creation, querying, update, and administration of databases.

Databases are not usually portable across different DBMSs, but different DBMSs can interoperate by using standards such as SQL and ODBC or JDBC to allow a single application to work with more than one DBMS. Computer scientists may classify database management systems according to the database models that they support; the most popular database systems since the 1980s have all supported the relational model, generally associated with the SQL language. Sometimes a DBMS is loosely referred to as a "database".


Terminology and overview

Formally, "database" refers to a set of related data and how it is organized. Access to this data is usually provided by a "database management system" (DBMS) comprising a set of integrated computer software that allows users to interact with one or more databases and provide access to all data contained in the database (although restrictions may exist which limits access to certain data). The DBMS provides a variety of functions that enable the entry, storage, and retrieval of large amounts of information and provide a way to manage how the information is organized.

Because of the close relationship between them, the term "database" is often used casually to refer to both a database and the DBMS used to manipulate it.

Outside the world of professional information technology, the term database is often used to refer to any collection of related data (such as a spreadsheet or a card index). This article is concerned only with databases where the size and usage requirements necessitate use of a database management system.

Existing DBMSs provide various functions that allow management of a database and its data, which can be classified into four main functional groups:

  • Data definition - Creation, modification, and removal of the definitions that determine the organization of the data.
  • Update - Insertion, modification, and deletion of the actual data.
  • Retrieval - Providing information in a form directly usable or for further processing by other applications. The retrieved data may be made available in a form basically the same as it is stored in the database or in a new form obtained by transforming or combining existing data from the database.
  • Administration - Registering and monitoring users, enforcing data security, monitoring performance, maintaining data integrity, dealing with concurrency control, and recovering information that has been corrupted by some event such as an unexpected system failure.

Both a database and its DBMS conform to the principles of a particular database model. "Database system" refers collectively to the database model, the database management system, and the database.

Physically, database servers are dedicated computers that hold the actual databases and run only the DBMS and related software. Database servers are usually multiprocessor computers, with generous memory and RAID disk arrays used for stable storage. RAID is used for recovery of data if any of the disks fail. Hardware database accelerators, connected to one or more servers via a high-speed channel, are also used in large-volume transaction processing environments. DBMSs are found at the heart of most database applications. DBMSs may be built around a custom multitasking kernel with built-in networking support, but modern DBMSs typically rely on a standard operating system to provide these functions.

Since DBMSs comprise a significant market, computer and storage vendors often take DBMS requirements into account in their own development plans.

Databases and DBMSs can be categorized according to the database model(s) that they support (such as relational or XML), the type(s) of computer they run on (from a server cluster to a mobile phone), the query language(s) used to access the database (such as SQL or XQuery), and their internal engineering, which affects performance, scalability, resilience, and security.


Applications

Databases are used to support internal operations of organizations and to underpin online interactions with customers and suppliers (see Enterprise software).

Databases are also used to hold administrative information and more specialized data, such as engineering data or economic models. Examples of database applications include computerized library systems, flight reservation systems, computerized parts inventory systems, and many content management systems that store websites as collections of webpages in a database.



General-purpose and special-purpose DBMSs

A DBMS may be a complex piece of software, and its development typically requires thousands of human years of development effort. Some general-purpose DBMSs such as Adabas, Oracle, and DB2 have been undergoing upgrades since the 1970s. General-purpose DBMSs aim to meet the needs of as many applications as possible, which adds to the complexity. However, since their development cost can be spread over a large number of users, they are often the most cost-effective approach. On the other hand, a general-purpose DBMS may introduce unnecessary overhead. Therefore, many systems use a special-purpose DBMS instead. A common example is an email system that performs many of the functions of a general-purpose DBMS, such as the insertion and deletion of messages composed of various items of data, or associating messages with a particular email address; but these functions are limited to what is required to handle email, and they do not provide the user with all of the functionality that would be available using a general-purpose DBMS.

Application software can often access a database on behalf of end users, without exposing the DBMS interface directly. Application programmers may use a wire protocol directly, or more likely through an application programming interface. Database designers and database administrators interact with the DBMS through dedicated interfaces to build and maintain the applications' databases, and thus need more knowledge and understanding of how a DBMS operates and of the DBMS's external interfaces and tuning parameters.



History

The sizes, capabilities, and performance of databases and their respective DBMSs have grown by orders of magnitude. These performance increases were enabled by technological advances in the areas of processors, computer memory, computer storage, and computer networks. The development of database technology can be divided into three eras based on data model or structure: navigational, SQL/relational, and post-relational.

The two main early navigational data models were the hierarchical model, exemplified by IBM's IMS system, and the CODASYL model (network model), implemented in a number of products such as IDMS.

The relational model, first proposed in 1970 by Edgar F. Codd, departed from this tradition by insisting that applications should search for data by content, rather than by following links. The relational model employs sets of ledger-style tables, each used for a different type of entity. Only in the mid-1980s did computing hardware become powerful enough to allow the wide deployment of relational systems (DBMSs plus applications). By the early 1990s, however, relational systems dominated in all large-scale data processing applications, and as of 2018 they remain dominant: IBM DB2, Oracle, MySQL, and Microsoft SQL Server are the top DBMSs. The dominant database language, standardized SQL for the relational model, has influenced database languages for other data models.

Object databases were developed in the 1980s to overcome the inconvenience of object-relational impedance mismatch, which led to the coining of the term "post-relational" and also to the development of hybrid object-relational databases.

The next generation of post-relational databases in the late 2000s became known as NoSQL databases, introducing fast key-value stores and document-oriented databases. A competing "next generation" known as NewSQL databases attempted new implementations that retained the relational/SQL model while aiming to match the high performance of NoSQL compared to commercially available relational DBMSs.

1960s, navigational DBMS

The introduction of the term database coincided with the availability of direct-access storage (disks and drums) from the mid-1960s onwards. The term represented a contrast with the tape-based systems of the past, allowing shared interactive use rather than daily batch processing. The Oxford English Dictionary cites a 1962 report by the System Development Corporation of California as the first to use the term "data-base" in a specific technical sense.

As computers grew in speed and capability, a number of general-purpose database systems emerged; by the mid-1960s a number of such systems had come into commercial use. Interest in a standard began to grow, and Charles Bachman, author of one such product, the Integrated Data Store (IDS), founded the "Database Task Group" within CODASYL, the group responsible for the creation and standardization of COBOL. In 1971, the Database Task Group delivered their standard, which generally became known as the "CODASYL approach", and soon a number of commercial products based on this approach entered the market.

The CODASYL approach relied on the "manual" navigation of a linked data set which was formed into a large network. Applications could find records by one of three methods:

  1. Use of a primary key (known as a CALC key, typically implemented by hashing)
  2. Navigating relationships (called sets) from one record to another
  3. Scanning all the records in sequential order

Later systems added B-trees to provide alternate access paths. Many CODASYL databases also added a very simple query language. However, in the final tally, CODASYL was very complex and required significant training and effort to produce useful applications.

IBM also had their own DBMS in 1966, known as the Information Management System (IMS). IMS was a development of software written for the Apollo program on the System/360. IMS was generally similar in concept to CODASYL, but used a strict hierarchy for its model of data navigation instead of CODASYL's network model. Both concepts later became known as navigational databases due to the way data was accessed, and Bachman's 1973 Turing Award lecture was The Programmer as Navigator. IMS is classified as a hierarchical database. IDMS and Cincom Systems' TOTAL database are classified as network databases. IMS remained in use as of 2014.

1970s, relational DBMS

Edgar Codd worked at IBM in San Jose, California, in one of their offshoot offices that was primarily involved in the development of hard disk systems. He was unhappy with the navigational model of the CODASYL approach, notably the lack of a "search" facility. In 1970, he wrote a number of papers outlining a new approach to database construction that eventually culminated in the groundbreaking A Relational Model of Data for Large Shared Data Banks.

In this paper, he described a new system for storing and working with large databases. Instead of records being stored in some sort of linked list of free-form records as in CODASYL, Codd's idea was to use a "table" of fixed-length records, with each table used for a different type of entity. A linked-list system would be very inefficient when storing "sparse" databases where some of the data for any one record could be left empty. The relational model solved this by splitting the data into a series of normalized tables (or relations), with optional elements being moved out of the main table to where they would take up room only if needed. Data may be freely inserted, deleted, and edited in these tables, with the DBMS doing whatever maintenance is needed to present a table view to the application/user.

The relational model also allowed the content of the database to evolve without constant rewriting of links and pointers. The relational part comes from entities referencing other entities in what is known as a one-to-many relationship, like a traditional hierarchical model, and a many-to-many relationship, like a navigational (network) model. Thus, a relational model can express both hierarchical and navigational models, as well as its native tabular model, allowing for pure or combined modeling in terms of these three models, as the application requires.

For instance, a common use of a database system is to track information about users: their names, login information, various addresses, and phone numbers. In the navigational approach, all of this data would be placed in a single record, and unused items would simply not be placed in the database. In the relational approach, the data would be normalized into a user table, an address table, and a phone number table (for instance). Records would be created in these optional tables only if the address or phone numbers were actually provided.

Linking the information back together is the key to this system. In the relational model, some bit of information is used as a "key", uniquely defining a particular record. When information is being collected about a user, information stored in the optional tables is found by searching for this key. For instance, if the login name of a user is unique, addresses and phone numbers for that user are recorded with the login name as their key. This simple "relinking" of related data back into a single collection is something that traditional computer languages were not designed for.
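A minimal sketch of this relinking in modern SQL terms (the table and column names here are hypothetical, chosen only for illustration): the unique login name is the key that ties the optional phone table back to the user, and a single query gathers the scattered data.

  -- Hypothetical normalized schema: users plus an optional phone table.
  CREATE TABLE users (
      login VARCHAR(30) PRIMARY KEY,   -- the unique key
      name  VARCHAR(100) NOT NULL
  );

  CREATE TABLE phone_numbers (
      login VARCHAR(30) REFERENCES users (login),
      phone VARCHAR(20) NOT NULL
  );

  -- Relinking: collect one user's related data in a single operation.
  SELECT u.name, p.phone
  FROM users AS u
  JOIN phone_numbers AS p ON p.login = u.login
  WHERE u.login = 'jsmith';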

Just as the navigational approach would require programs to loop in order to collect records, the relational approach would require loops to collect information about any one record. Codd's solution was a set-oriented language, a suggestion that would later spawn the ubiquitous SQL. Using a branch of mathematics known as tuple calculus, he demonstrated that such a system could support all the operations of normal databases (inserting, updating, etc.) as well as providing a simple system for finding and returning sets of data in a single operation.

Codd's paper was picked up by two people at Berkeley, Eugene Wong and Michael Stonebraker. They started a project known as INGRES using funding that had already been allocated for a geographical database project, with student programmers producing code. Beginning in 1973, INGRES delivered its first test products, which were generally ready for widespread use in 1979. INGRES was similar to System R in a number of ways, including the use of a "language" for data access, known as QUEL. Over time, INGRES moved to the emerging SQL standard.

IBM itself did one test implementation of the relational model, PRTV, and a production one, Business System 12, both now discontinued. Honeywell wrote MRDS for Multics, and now there are two new implementations: Alphora Dataphor and Rel. Most other DBMS implementations usually called relational are actually SQL DBMSs.

In 1970, the University of Michigan began development of the MICRO Information Management System based on D.L. Childs' Set-Theoretic Data model. MICRO was used to manage very large data sets by the US Department of Labor, the US Environmental Protection Agency, and researchers from the University of Alberta, the University of Michigan, and Wayne State University. It ran on IBM mainframe computers using the Michigan Terminal System. The system remained in production until 1998.

Integrated approach

In the 1970s and 1980s, attempts were made to build database systems with integrated hardware and software. The underlying philosophy was that such integration would provide higher performance at a lower cost. Examples were IBM System/38, the early offering of Teradata, and the Britton Lee, Inc. database machine.

Another approach to hardware support for database management was ICL's CAFS accelerator, a hardware disk controller with programmable search capabilities. In the long term, these efforts were generally unsuccessful because specialized database machines could not keep pace with the rapid development and progress of general-purpose computers. Thus most database systems nowadays are software systems running on general-purpose hardware, using general-purpose computer data storage. However, this idea is still pursued for certain applications by some companies like Netezza and Oracle (Exadata).

Late 1970s, SQL DBMS

IBM started working on a prototype system loosely based on Codd's concepts as System R in the early 1970s. The first version was ready in 1974/5, and work then started on multi-table systems in which the data could be split so that all of the data for a record (some of which is optional) did not have to be stored in a single large "chunk". Subsequent multi-user versions were tested by customers in 1978 and 1979, by which time a standardized query language - SQL - had been added. Codd's ideas were establishing themselves as both workable and superior to CODASYL, pushing IBM to develop a true production version of System R, known as SQL/DS, and, later, Database 2 (DB2).

Larry Ellison's Oracle Database (or more simply, Oracle) started from a different chain, based on IBM's papers on System R. Though Oracle V1 implementations were completed in 1978, it wasn't until Oracle Version 2 that Ellison beat IBM to market in 1979.

Stonebraker went on to apply the lessons from INGRES to develop a new database, Postgres, which is now known as PostgreSQL. PostgreSQL is often used for global mission-critical applications (the .org and .info domain name registries use it as their primary data store, as do many large companies and financial institutions).

In Sweden, Codd's paper was also read, and Mimer SQL was developed from the mid-1970s at Uppsala University. In 1984, this project was consolidated into an independent enterprise. In the early 1980s, Mimer introduced transaction handling for high robustness in applications, an idea that was subsequently implemented in most other DBMSs.

Another data model, the entity-relationship model, emerged in 1976 and gained popularity for database design as it emphasized a more familiar description than the earlier relational model. Later on, entity-relationship constructs were retrofitted as a data modeling construct for the relational model, and the difference between the two has become irrelevant.

1980s, on the desktop

The 1980s ushered in the age of desktop computing. The new computers empowered their users with spreadsheets like Lotus 1-2-3 and database software like dBASE. The dBASE product was lightweight and easy for any computer user to understand out of the box. C. Wayne Ratliff, the creator of dBASE, stated: "dBASE was different from programs like BASIC, C, FORTRAN, and COBOL in that a lot of the dirty work had already been done. The data manipulation is done by dBASE instead of by the user, so the user can concentrate on what he is doing, rather than having to mess with the dirty details of opening, reading, and closing files, and managing space allocation." dBASE was one of the top-selling software titles in the 1980s and early 1990s.

1990s, object-oriented

The 1990s, along with a rise in object-oriented programming, saw a growth in how data in various databases were handled. Programmers and designers began to treat the data in their databases as objects. That is to say that if a person's data were in a database, that person's attributes, such as their address, phone number, and age, were now considered to belong to that person instead of being extraneous data. This allows for relations between data to be relations to objects and their attributes and not to individual fields. The term "object-relational impedance mismatch" described the inconvenience of translating between programmed objects and database tables. Object databases and object-relational databases attempt to solve this problem by providing an object-oriented language (sometimes as extensions to SQL) that programmers can use as an alternative to purely relational SQL. On the programming side, libraries known as object-relational mappings (ORMs) attempt to solve the same problem.

2000s, NoSQL and NewSQL

XML databases are a type of structured document-oriented database that allows querying based on XML document attributes. XML databases are mostly used in enterprise database management, where XML is being used as the machine-to-machine data interoperability standard. XML database management systems include the commercial software MarkLogic and Oracle Berkeley DB XML, and the free-to-use Clusterpoint distributed XML/JSON database. All are enterprise software database platforms and support industry-standard ACID-compliant transaction processing with strong database consistency characteristics and a high level of database security.

NoSQL databases are often very fast, do not require fixed table schemas, avoid join operations by storing denormalized data, and are designed to scale horizontally. The most popular NoSQL systems include MongoDB, Couchbase, Riak, Memcached, Redis, CouchDB, Hazelcast, Apache Cassandra, and HBase, all of which are open-source software products.

In recent years, there has been a strong demand for massively distributed databases with high partition tolerance, but according to the CAP theorem it is impossible for a distributed system to simultaneously provide consistency, availability, and partition tolerance guarantees. A distributed system can satisfy any two of these guarantees at the same time, but not all three. For that reason, many NoSQL databases use what is called eventual consistency to provide both availability and partition tolerance guarantees with a reduced level of data consistency.

NewSQL is a class of modern relational databases that aims to provide the same scalable performance as NoSQL systems for online transaction processing (read-write) workloads while still using SQL and maintaining the ACID guarantees of a traditional database system. Such databases include Google F1/Spanner, Citus, CockroachDB, TiDB, ScaleBase, MemSQL, NuoDB, and VoltDB.



Research

Database technology has been an active research topic since the 1960s, both in academia and in the research and development groups of companies (for example IBM Research). Research activity includes theory and the development of prototypes. Notable research topics have included models, the atomic transaction concept and related concurrency control techniques, query languages and query optimization methods, RAID, and more.

The database research area has several dedicated academic journals (for example, ACM Transactions on Database Systems - TODS, Data and Knowledge Engineering - DKE) and annual conferences (e.g., ACM SIGMOD, ACM PODS, VLDB, IEEE ICDE).



Design and modeling

The first task of a database designer is to produce a conceptual data model that reflects the structure of the information to be held in the database. A common approach to this is to develop an entity-relationship model, often with the aid of drawing tools. Another popular approach is the Unified Modeling Language. A successful data model will accurately reflect the possible state of the external world being modeled: for example, if people can have more than one phone number, it will allow this information to be captured. Designing a good conceptual data model requires a good understanding of the application domain; it typically involves asking deep questions about the things of interest to an organization, like "can a customer also be a supplier?", or "if a product is sold with two different forms of packaging, are those the same product or different products?", or "if a plane flies from New York to Dubai via Frankfurt, is that one flight or two (or maybe even three)?". The answers to these questions establish the definitions of the terminology used for entities (customers, products, flights, flight segments) and their relationships and attributes.

Producing the conceptual data model sometimes involves input from business processes, or the analysis of workflow in the organization. This can help to establish what information is needed in the database, and what can be left out. For example, it can help when deciding whether the database needs to hold historic data as well as current data.

Having produced a conceptual data model that users are happy with, the next stage is to translate this into a schema that implements the relevant data structures within the database. This process is often called logical database design, and the output is a logical data model expressed in the form of a schema. Whereas the conceptual data model is (in theory at least) independent of the choice of database technology, the logical data model will be expressed in terms of a particular database model supported by the chosen DBMS. (The terms data model and database model are often used interchangeably, but in this article we use data model for the design of a specific database, and database model for the modeling notation used to express that design.)

The most popular database model for general-purpose databases is the relational model, or more precisely, the relational model as represented by the SQL language. The process of creating a logical database design using this model uses a methodical approach known as normalization. The goal of normalization is to ensure that each elementary "fact" is recorded in only one place, so that insertions, updates, and deletions automatically maintain consistency.
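As a small illustration (the tables here are hypothetical), normalization records the fact "customer 42 lives in Chicago" exactly once, so a change of address is a single-row update rather than a sweep over every order:

  -- Each elementary fact lives in exactly one place.
  CREATE TABLE customers (
      customer_id INTEGER PRIMARY KEY,
      name        VARCHAR(100) NOT NULL,
      city        VARCHAR(100)           -- recorded once per customer
  );

  CREATE TABLE orders (
      order_id    INTEGER PRIMARY KEY,
      customer_id INTEGER REFERENCES customers (customer_id),
      product     VARCHAR(100) NOT NULL  -- no duplicated customer data here
  );

  -- One update keeps every order consistent automatically.
  UPDATE customers SET city = 'Chicago' WHERE customer_id = 42;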

The final stage of database design is to make the decisions that affect performance, scalability, recovery, security, and the like, which depend on the particular DBMS. This is often called physical database design, and the output is the physical data model. A key goal during this stage is data independence, meaning that the decisions made for performance optimization purposes should be invisible to end-users and applications. There are two types of data independence: physical data independence and logical data independence. Physical design is driven mainly by performance requirements, and requires a good knowledge of the expected workload and access patterns, and a deep understanding of the features offered by the chosen DBMS.

Another aspect of physical database design is security. It involves both defining access control to database objects and defining security levels and methods for the data itself.

Models

A database model is a type of data model that determines the logical structure of a database and fundamentally determines in which manner data can be stored, organized, and manipulated. The most popular example of a database model is the relational model (or the SQL approximation of relational), which uses a table-based format.

Common logical data models for databases include:

  • Navigational databases
    • Hierarchical database model
    • Network model
    • Graph database
  • Relational model
  • Entity relationship model
    • Enhanced entity-relationship model
  • Object model
  • Document model
  • Entity-attribute-value model
  • Star schema

An object-relational database combines the two related structures.

Physical data models include:

  • Inverted index
  • Flat file

Other models include:

  • Associative model
  • Multi-dimensional model
  • Array model
  • Multivalue model

Specialized models are optimized for particular types of data:

  • XML database
  • Semantic model
  • Content store
  • Event store
  • Time series model

External, conceptual, and internal views

A database management system provides three views of the database data:

  • The external level defines how each group of end-users sees the organization of data in the database. A single database can have any number of views at the external level.
  • The conceptual level unifies the various external views into a compatible global view. It provides the synthesis of all the external views. It is out of the scope of the various database end-users, and is rather of interest to database application developers and database administrators.
  • The internal level (or physical level) is the internal organization of data inside a DBMS. It is concerned with cost, performance, scalability, and other operational matters. It deals with the storage layout of the data, using storage structures such as indexes to enhance performance. Occasionally it stores data of individual views (materialized views), computed from generic data, if a performance justification exists for such redundancy. It balances all the external views' performance requirements, possibly conflicting, in an attempt to optimize overall performance across all activities.

While there is typically only one conceptual (or logical) and physical (or internal) view of the data, there can be any number of different external views. This allows users to see database information in a more business-related way rather than from a technical, processing viewpoint. For example, the financial department of a company needs the payment details of all employees as part of the company's expenses, but does not need details about employees that are of interest to the human resources department. Thus different departments need different views of the company's database.
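A sketch of how such an external view might be defined in SQL (hypothetical table and view names): the finance department's view exposes payroll columns while hiding the rest of the conceptual schema.

  -- Conceptual level: the full employee relation.
  CREATE TABLE employees (
      emp_id       INTEGER PRIMARY KEY,
      name         VARCHAR(100) NOT NULL,
      salary       NUMERIC(10, 2),
      medical_note VARCHAR(200)
  );

  -- External level: finance sees payment details only.
  CREATE VIEW finance_view AS
  SELECT emp_id, name, salary
  FROM employees;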

The three-level database architecture relates to the concept of data independence, which was one of the major initial driving forces of the relational model. The idea is that changes made at a certain level do not affect the view at a higher level. For example, changes in the internal level do not affect application programs written using conceptual-level interfaces, which reduces the impact of making physical changes to improve performance.

The conceptual view provides a level of indirection between internal and external. On one hand it provides a common view of the database, independent of different external view structures, and on the other hand it abstracts away details of how the data are stored or managed (internal level). In principle every level, and even every external view, can be presented by a different data model. In practice a given DBMS usually uses the same data model for both the external and the conceptual levels (e.g., the relational model). The internal level, which is hidden inside the DBMS and depends on its implementation, requires a different level of detail and uses its own types of data structures.

Separating the external, conceptual, and internal levels was a major feature of the relational database model implementations that dominate 21st-century databases.



Languages

Database languages are special-purpose languages, which allow one or more of the following tasks, sometimes distinguished as sublanguages (each is illustrated in the sketch after the list):

  • Data control language (DCL) - controls access to data;
  • Data definition language (DDL) - defines data structures, such as creating, altering, or dropping them, and the relationships among them;
  • Data manipulation language (DML) - performs tasks such as inserting, updating, or deleting occurrences of data;
  • Data query language (DQL) - allows searching for information and computing derived information.
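A minimal sketch with one statement per sublanguage (hypothetical table and role names; the exact statements available vary by DBMS):

  -- DDL: define the organization of the data.
  CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance NUMERIC(12, 2));

  -- DML: insert, update, or delete occurrences of data.
  INSERT INTO accounts (id, balance) VALUES (1, 100.00);

  -- DQL: search for information and compute derived information.
  SELECT AVG(balance) FROM accounts;

  -- DCL: control access to the data.
  GRANT SELECT ON accounts TO auditor;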

Database languages are specific to a particular data model. Notable examples include:

  • SQL combines the roles of data definition, data manipulation, and query in a single language. It was one of the first commercial languages for the relational model, although it departs in some respects from the relational model as described by Codd (for example, the rows and columns of a table can be ordered). SQL became a standard of the American National Standards Institute (ANSI) in 1986, and of the International Organization for Standardization (ISO) in 1987. The standards have been regularly enhanced since and are supported (with varying degrees of conformance) by all mainstream commercial relational DBMSs.
  • OQL is an object model language standard (from the Object Data Management Group). It has influenced the design of some of the newer query languages like JDOQL and EJB QL.
  • XQuery is a standard XML query language implemented by XML database systems such as MarkLogic and eXist, by relational databases with XML capability such as Oracle and DB2, and also by in-memory XML processors such as Saxon.
  • SQL/XML combines XQuery with SQL.

The database language can also incorporate features such as:

  • DBMS specific configuration and storage engine management
  • Calculations for altering query results, such as counting, summing, averaging, sorting, categorizing, and cross referencing
  • Enforcement of constraints (e.g. in an automotive database, allowing only one engine type per car; see the sketch after this list)
  • Application programming interface versions of the query language, for programmer convenience
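Continuing the automotive example from the list above, a brief sketch of constraint enforcement in SQL (hypothetical table): the column definition guarantees exactly one engine type per car, drawn from an allowed set.

  CREATE TABLE cars (
      car_id      INTEGER PRIMARY KEY,
      engine_type VARCHAR(20) NOT NULL          -- exactly one engine type
          CHECK (engine_type IN ('petrol', 'diesel', 'electric'))
  );

  -- Rejected by the DBMS: 'steam' violates the CHECK constraint.
  -- INSERT INTO cars (car_id, engine_type) VALUES (7, 'steam');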



Storage

Database storage is the container of the physical materialization of a database. It comprises the internal (physical) level in the database architecture. It also contains all the information needed (e.g., metadata, "data about the data", and internal data structures) to reconstruct the conceptual level and external level from the internal level when needed. Putting data into permanent storage is generally the responsibility of the database engine, a.k.a. "storage engine". Though typically accessed by a DBMS through the underlying operating system (and often using the operating system's file systems as intermediates for storage layout), storage properties and configuration settings are extremely important for the efficient operation of the DBMS, and thus are closely maintained by database administrators. A DBMS, while in operation, always has its database residing in several types of storage (e.g., memory and external storage). The database data and the additional needed information, possibly in very large amounts, are coded into bits. Data typically reside in storage in structures that look completely different from the way the data look at the conceptual and external levels, but in ways that attempt to optimize (as much as possible) the reconstruction of these levels when needed by users and programs, as well as the computation of additional types of needed information from the data (e.g., when querying the database).

Some DBMSs support specifying which character encoding is used to store data, so multiple encodings can be used in the same database.
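For example (syntax varies by DBMS; the first form is PostgreSQL-style, the second MySQL-style), the encoding can be chosen when a database is created:

  -- PostgreSQL-style: set the encoding for a whole database.
  CREATE DATABASE archive ENCODING 'UTF8';

  -- MySQL-style: databases, tables, and even individual columns can
  -- carry their own character sets, mixing encodings in one database.
  -- CREATE DATABASE archive CHARACTER SET utf8mb4;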

Various low-level database storage structures are used by the storage engine to serialize the data model so it can be written to the medium of choice. Techniques such as indexing may be used to improve performance. Conventional storage is row-oriented, but there are also column-oriented and correlation databases.
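A small sketch of indexing (reusing the hypothetical customers table from the normalization example earlier): the index gives the storage engine a direct access path, and queries do not need to change to benefit from it.

  -- Without an index, this lookup may scan every row of the table.
  CREATE INDEX idx_customers_name ON customers (name);

  -- The query text is unchanged; the optimizer can now use the index.
  SELECT customer_id, city FROM customers WHERE name = 'Ada Lovelace';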

Materialized views

Often storage redundancy is employed to increase performance. A common example is storing materialized views, which consist of frequently needed external views or query results. Storing such views saves the expensive computing of them each time they are needed. The downsides of materialized views are the overhead incurred when updating them to keep them synchronized with their original updated database data, and the cost of storage redundancy.
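A PostgreSQL-style sketch (the statement is not part of standard SQL and differs across DBMSs), reusing the hypothetical orders table from earlier: the aggregate is computed once and stored, at the cost of an explicit refresh to stay in sync.

  -- Store a frequently needed query result instead of recomputing it.
  CREATE MATERIALIZED VIEW order_totals AS
  SELECT customer_id, COUNT(*) AS order_count
  FROM orders
  GROUP BY customer_id;

  -- The redundancy cost: the view must be refreshed after base updates.
  REFRESH MATERIALIZED VIEW order_totals;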

Replication

Occasionally a database employs storage redundancy by replicating database objects (with one or more copies) to increase data availability (both to improve the performance of simultaneous multiple end-user accesses to the same database object, and to provide resiliency in a case of partial failure of a distributed database). Updates of a replicated object need to be synchronized across the object copies. In many cases, the entire database is replicated.

Security

Database security deals with all the various aspects of protecting the database content, its owners, and its users. It ranges from protection from intentional unauthorized database uses to unintentional database accesses by unauthorized entities (e.g., a person or a computer program).

Database access control deals with controlling who (a person or a certain computer program) is allowed to access what information in the database. The information may comprise specific database objects (e.g., record types, specific records, data structures), certain computations over certain objects (e.g., query types, or specific queries), or using specific access paths to the former (e.g., using specific indexes or other data structures to access information). Database access controls are set by special authorized personnel (authorized by the database owner) that use dedicated protected DBMS security interfaces.

This may be managed directly on an individual basis, or by the assignment of individuals and privileges to groups, or (in the most elaborate models) through the assignment of individuals and groups to roles which are then granted entitlements. Data security prevents unauthorized users from viewing or updating the database. Using passwords, users are allowed access to the entire database or subsets of it called "subschemas". For example, an employee database can contain all the data about an individual employee, but one group of users may be authorized to view only payroll data, while others are allowed access to only work history and medical data. If the DBMS provides a way to interactively enter and update the database, as well as interrogate it, this capability allows for managing personal databases.
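A brief sketch of the role-based variant in SQL (hypothetical tables and role names): privileges are granted to roles, and individuals are then assigned to roles rather than receiving raw privileges.

  -- Payroll clerks may read salary data only.
  CREATE ROLE payroll_clerk;
  GRANT SELECT ON employee_salaries TO payroll_clerk;

  -- HR staff may read work history and medical data instead.
  CREATE ROLE hr_staff;
  GRANT SELECT ON employee_history, employee_medical TO hr_staff;

  -- Individuals receive roles, not raw privileges.
  GRANT payroll_clerk TO alice;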

Data security in general deals with protecting specific chunks of data, both physically (i.e., from corruption, destruction, or removal; e.g., see physical security) and in terms of interpreting them, or parts of them, as meaningful information (e.g., by looking at the strings of bits that they comprise and deducing specific valid credit-card numbers; e.g., see data encryption).

Change and access logging records who accessed which attributes, what was changed, and when it was changed. Logging services allow for a forensic database audit later by keeping a record of access occurrences and changes. Sometimes application-level code is used to record changes rather than leaving this to the database. Monitoring can be set up to attempt to detect security breaches.

Transactions and concurrency

Database transactions can be used to introduce some level of fault tolerance and data integrity after recovery from a crash. A database transaction is a unit of work, typically encapsulating a number of operations over a database (e.g., reading a database object, writing, acquiring a lock, etc.), an abstraction supported in databases and also other systems. Each transaction has well-defined boundaries in terms of which program/code executions are included in that transaction (determined by the transaction's programmer via special transaction commands).

The acronym ACID describes some ideal properties of a database transaction: Atomicity, Consistency, Isolation, and Durability.
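The classic funds-transfer sketch shows these properties at work (reusing the hypothetical accounts table from earlier): atomicity means both updates commit or neither does, and isolation keeps concurrent readers from observing the money in two places at once.

  BEGIN;  -- start the transaction's well-defined boundary
  UPDATE accounts SET balance = balance - 100 WHERE id = 1;
  UPDATE accounts SET balance = balance + 100 WHERE id = 2;
  COMMIT; -- make both updates durable together
  -- On error, ROLLBACK; would undo both updates instead.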

Migration

A database built with one DBMS is not portable to another DBMS (i.e., the other DBMS cannot run it). However, in some situations, it is desirable to migrate a database from one DBMS to another. The reasons are primarily economical (different DBMSs may have different total costs of ownership or TCOs), functional, and operational (different DBMSs may have different capabilities). The migration involves the database's transformation from one DBMS type to another. The transformation should maintain (if possible) the database-related application (i.e., all related application programs) intact. Thus, the database's conceptual and external architectural levels should be maintained in the transformation. It may be desired that some aspects of the architecture's internal level are maintained as well. A complex or large database migration may be a complicated and costly (one-time) project by itself, which should be factored into the decision to migrate. This is in spite of the fact that tools may exist to help migration between specific DBMSs. Typically, a DBMS vendor provides tools to help import databases from other popular DBMSs.

Building, maintaining, and tuning

After designing a database for an application, the next stage is building the database. Typically, an appropriate general-purpose DBMS can be selected to be used for this purpose. A DBMS provides the needed user interfaces to be used by database administrators to define the needed application's data structures within the DBMS's respective data model. Other user interfaces are used to select needed DBMS parameters (such as security-related parameters, storage allocation parameters, etc.).

When the database is ready (all its data structures and other needed components are defined), it is typically populated with the initial application's data (database initialization, which is typically a distinct project; in many cases using specialized DBMS interfaces that support bulk insertion) before making it operational. In some cases, the database becomes operational while empty of application data, and data accumulate during its operation.
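As an example of such a bulk-insertion interface (PostgreSQL's COPY; other DBMSs provide similar utilities, and the file path here is hypothetical):

  -- Load initial application data in bulk, not row by row.
  COPY customers (customer_id, name, city)
  FROM '/tmp/customers.csv'
  WITH (FORMAT csv, HEADER true);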

After the database is created, initialized, and populated, it needs to be maintained. Various database parameters may need changing and the database may need to be tuned for better performance; the application's data structures may be changed or added, new related application programs may be written to add to the application's functionality, etc.

Backup and restore

Sometimes it is desired to bring a database back to a previous state (for many reasons, e.g., cases when the database is found corrupted due to a software error, or if it has been updated with erroneous data). To achieve this, a backup operation is done occasionally or continuously, where each desired database state (i.e., the values of its data and their embedding in the database's data structures) is kept within dedicated backup files (many techniques exist to do this effectively). When a database administrator decides to bring the database back to this state (e.g., by specifying this state by a desired point in time when the database was in this state), these files are used to restore that state.

Static analysis

Static analysis techniques for software verification can be applied also in the scenario of query languages. In particular, the abstract interpretation framework has been extended to the field of query languages for relational databases as a way to support sound approximation techniques. The semantics of query languages can be tuned according to suitable abstractions of the concrete domain of data. The abstraction of relational database systems has many interesting applications, in particular for security purposes, such as fine-grained access control, watermarking, etc.

Miscellaneous features

Other DBMS features may include:

  • Database logs - This helps in keeping a history of the executed functions.
  • Graphics components for producing graphs and charts, especially in data warehouse systems
  • Query optimizer - Performs query optimization on every query to choose an efficient query plan (a partial order (tree) of operations) to be executed to compute the query result. May be specific to a particular storage engine.
  • Tools or hooks for database design, application programming, application program maintenance, database performance analysis and monitoring, database configuration monitoring, DBMS hardware configuration (a DBMS and related database may span computers, networks, and storage units) and related database mapping (especially for a distributed DBMS), storage allocation and database layout monitoring, storage migration, etc.
  • Increasingly, there are calls for a single system that incorporates all of these core functionalities into the same build, test, and deployment framework for database management and source control. Borrowing from other developments in the software industry, some market such offerings as "DevOps for database".




Source of the article: Wikipedia
