
Knowledge Management


Objective   10/13/2017

My focus is personal knowledge management (PKM).

Knowledge Management   10/13/2017

from Wikipedia
dmoztools.net/Reference/Knowledge_Management/

Knowledge management (KM) is the process of creating, sharing, using and managing the knowledge and information of an organisation.[1] It refers to a multidisciplinary approach to achieving organisational objectives by making the best use of knowledge.[2]

An established discipline since 1991, KM includes courses taught in the fields of business administration, information systems, management, library, and information sciences.[3][4] Other fields may contribute to KM research, including information and media, computer science, public health and public policy.[5] Several universities offer dedicated master's degrees in knowledge management.

Many large companies, public institutions and non-profit organisations have resources dedicated to internal KM efforts, often as a part of their business strategy, IT, or human resource management departments.[6] Several consulting companies provide advice regarding KM to these organisations.[6]

Knowledge management efforts typically focus on organisational objectives such as improved performance, competitive advantage, innovation, the sharing of lessons learned, integration, and continuous improvement of the organisation.[7] These efforts overlap with organisational learning and may be distinguished from it by a greater focus on the management of knowledge as a strategic asset and on encouraging the sharing of knowledge.[2][8] KM is an enabler of organisational learning.[9][10]

KM technologies

Knowledge management (KM) technology can be categorised:

Groupware—Software that facilitates collaboration and sharing of organisational information. One of the earliest successful products in this category was Lotus Notes: it provided tools for threaded discussions, document sharing, organisation-wide uniform email, etc.

Workflow systems—Systems that allow the representation of processes associated with the creation, use and maintenance of organisational knowledge: for example, the process to create and utilise forms and documents (a minimal sketch follows this list).

Content management and document management systems—Software systems that automate the process of creating web content and/or documents. Roles such as editors, graphic designers, writers and producers can be explicitly modeled along with the tasks in the process and validation criteria. Commercial vendors started either to support documents (e.g. Documentum) or to support web content (e.g. Interwoven) but as the Internet grew these functions merged and vendors now perform both functions.

Enterprise portals—Software that aggregates information across the entire organisation or for groups such as project teams (e.g. Microsoft SharePoint).

eLearning—Software that enables organisations to create customised training and education. This can include lesson plans, monitoring progress and online classes.

Planning and scheduling software—Software that automates schedule creation and maintenance (e.g. Microsoft Outlook). The planning aspect can integrate with project management software such as Microsoft Project.[22]

Telepresence—Software that enables individuals to have virtual "face-to-face" meetings without assembling at one location. Videoconferencing is the most obvious example.

These categories overlap. Workflow, for example, is a significant aspect of content and document management systems, most of which have tools for developing enterprise portals.[7][48]
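As a rough illustration of the workflow idea, the sketch below models a document's life cycle as states and allowed transitions. The states and actions are invented for the example, not taken from any particular product (Python):

    # A minimal sketch of a document workflow: states and the actions
    # that move a document between them. All names here are hypothetical.
    TRANSITIONS = {
        ("draft", "submit"): "in_review",
        ("in_review", "approve"): "published",
        ("in_review", "reject"): "draft",
        ("published", "archive"): "archived",
    }

    def advance(state: str, action: str) -> str:
        """Return the next state, or raise if the action is not allowed."""
        try:
            return TRANSITIONS[(state, action)]
        except KeyError:
            raise ValueError(f"action {action!r} not allowed in state {state!r}")

    # A document moves from draft through review to publication.
    state = "draft"
    for action in ("submit", "approve"):
        state = advance(state, action)
    print(state)  # published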

Proprietary KM technology products such as Lotus Notes defined proprietary formats for email, documents, forms, etc. The Internet drove most vendors to adopt Internet formats. Open-source and freeware tools for the creation of blogs and wikis now enable capabilities that used to require expensive commercial tools.[34][49]

KM is driving the adoption of tools that enable organisations to work at the semantic level,[50] as part of the Semantic Web:[51] for example, the Stanford Protégé Ontology Editor. Some commentators have argued that after many years the Semantic Web has failed to see widespread adoption,[52][53][54] while other commentators have argued that it has been a success.[55]
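As a concrete sketch of what "working at the semantic level" means, the snippet below records a few facts as RDF triples using the open-source rdflib library and walks them programmatically. Only the rdflib API is real; the namespace and terms are invented for the example:

    # Requires: pip install rdflib
    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    EX = Namespace("http://example.org/km/")  # hypothetical namespace
    g = Graph()

    g.add((EX.GroupwareProduct, RDFS.subClassOf, EX.KMTechnology))
    g.add((EX.LotusNotes, RDF.type, EX.GroupwareProduct))
    g.add((EX.LotusNotes, RDFS.label, Literal("Lotus Notes")))

    # Because the data carries its own schema, a program can follow the
    # subclass link instead of relying on a fixed document format.
    for cls in g.subjects(RDFS.subClassOf, EX.KMTechnology):
        for inst in g.subjects(RDF.type, cls):
            print(g.value(inst, RDFS.label), "is a kind of KM technology")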


Cybernetics  10/13/2017

from Wikipedia

Cybernetics is a transdisciplinary[1] approach for exploring regulatory systems—their structures, constraints, and possibilities. Norbert Wiener defined cybernetics in 1948 as "the scientific study of control and communication in the animal and the machine."[2] In the 21st century, the term is often used in a rather loose way to imply "control of any system using technology." In other words, it is the scientific study of how humans, animals and machines control and communicate with each other.

Cybernetics is applicable when a system being analyzed incorporates a closed signaling loop—originally referred to as a "circular causal" relationship—that is, where action by the system generates some change in its environment and that change is reflected in the system in some manner (feedback) that triggers a system change. Cybernetics is relevant to, for example, mechanical, physical, biological, cognitive, and social systems. The essential goal of the broad field of cybernetics is to understand and define the functions and processes of systems that have goals and that participate in circular, causal chains that move from action to sensing to comparison with desired goal, and again to action. Its focus is how anything (digital, mechanical or biological) processes information, reacts to information, and changes or can be changed to better accomplish the first two tasks.[3] Cybernetics includes the study of feedback, black boxes and derived concepts such as communication and control in living organisms, machines and organizations including self-organization.
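The loop from action to sensing to comparison and back can be made concrete with a toy thermostat. The room model and controller gain below are invented for illustration (Python):

    # Toy feedback loop: the controller acts (heats), the environment
    # feeds the result back, and the controller compares it to the goal.
    goal = 21.0      # desired temperature, deg C
    temp = 15.0      # sensed temperature
    ambient = 10.0   # outside temperature
    k_p = 0.8        # proportional controller gain (made up)

    for step in range(10):
        error = goal - temp                    # compare sensed state to goal
        heat = max(0.0, k_p * error)           # action proportional to error
        temp += 0.5 * heat - 0.1 * (temp - ambient)  # environment feedback
        print(f"step {step}: temp={temp:.2f} error={error:+.2f}")

Note that a pure proportional controller settles a little below the goal (the classic steady-state offset), which is itself a feedback phenomenon the field studies.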

Concepts studied by cyberneticists include, but are not limited to: learning, cognition, adaptation, social control, emergence, convergence, communication, efficiency, efficacy, and connectivity. In cybernetics these concepts (otherwise already objects of study in other disciplines such as biology and engineering) are abstracted from the context of the specific organism or device.

The word cybernetics comes from Greek κυβερνητική (kybernētikḗ), meaning "governance", i.e., all that is pertinent to κυβερνάω (kybernáō), meaning "to steer, navigate or govern"; hence κυβέρνησις (kybérnēsis), meaning "government", and κυβερνήτης (kybernḗtēs), the governor or "helmsperson" of the "ship". Contemporary cybernetics began as an interdisciplinary study connecting the fields of control systems, electrical network theory, mechanical engineering, logic modeling, evolutionary biology, neuroscience, anthropology, and psychology in the 1940s, often attributed to the Macy Conferences. During the second half of the 20th century, cybernetics evolved in ways that distinguish first-order cybernetics (about observed systems) from second-order cybernetics (about observing systems).[4] More recently there is talk of a third-order cybernetics (doing in ways that embrace first- and second-order).[5]

Studies in cybernetics provide a means for examining the design and function of any system, including social systems such as business management and organizational learning, including for the purpose of making them more efficient and effective. Fields of study which have influenced or been influenced by cybernetics include game theory, system theory (a mathematical counterpart to cybernetics), perceptual control theory, sociology, psychology (especially neuropsychology, behavioral psychology, cognitive psychology), philosophy, architecture, and organizational theory.[6] System dynamics, which originated with applications of electrical engineering control theory to other kinds of simulation models (especially business systems) by Jay Forrester at MIT in the 1950s, is a related field.

Intelligence Quotient (IQ)  10/13/2017

from Wikipedia

An intelligence quotient (IQ) is a total score derived from several standardized tests designed to assess human intelligence. The abbreviation "IQ" was coined by the psychologist William Stern for the German term Intelligenzquotient, his term for a scoring method for intelligence tests that he advocated in a 1912 book while at the University of Breslau.[1] Historically, IQ was a score obtained by dividing a person's mental age score, obtained by administering an intelligence test, by the person's chronological age, both expressed in years and months; the resulting fraction was multiplied by 100 to obtain the IQ score.[2] For current IQ tests, the median raw score of the norming sample is defined as IQ 100, and each standard deviation (SD) up or down is defined as 15 IQ points greater or less,[3] although this was not always so historically. By this definition, approximately two-thirds of the population scores between IQ 85 and IQ 115. About 5 percent of the population scores above 125, and 5 percent below 75.[4][5]
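These percentages follow directly from a normal curve with mean 100 and SD 15. A quick check using only the Python standard library, under the idealising assumption that scores are exactly normally distributed:

    # Checking the deviation-IQ figures above (Python 3.8+).
    from statistics import NormalDist

    iq = NormalDist(mu=100, sigma=15)
    print(f"between 85 and 115: {iq.cdf(115) - iq.cdf(85):.3f}")  # ~0.683
    print(f"above 125: {1 - iq.cdf(125):.3f}")                    # ~0.048
    print(f"below 75:  {iq.cdf(75):.3f}")                         # ~0.048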

Scores from intelligence tests are estimates of intelligence because concrete measurements (e.g. distance, mass) cannot be achieved given the abstract nature of the concept of "intelligence".[6] IQ scores have been shown to be associated with such factors as morbidity and mortality,[7][8] parental social status,[9] and, to a substantial degree, biological parental IQ. While the heritability of IQ has been investigated for nearly a century, there is still debate about the significance of heritability estimates[10][11] and the mechanisms of inheritance.[12]

IQ scores are used for educational placement, assessment of intellectual disability, and evaluating job applicants. Even when students improve their scores on standardized tests, they do not always improve their cognitive abilities, such as memory, attention, and speed.[13] In research contexts, IQ scores have been studied as predictors of job performance and income. They are also used to study distributions of psychometric intelligence in populations and the correlations between it and other variables. Raw scores on IQ tests for many populations have been rising at an average rate that scales to three IQ points per decade since the early 20th century, a phenomenon called the Flynn effect. Investigation of different patterns of increases in subtest scores can also inform current research on human intelligence.

High IQ societies

There are social organizations, some international, which limit membership to people who score at or above the 98th percentile (two standard deviations above the mean) on some IQ test or equivalent. Mensa International is perhaps the best known of these. The largest 99.9th-percentile (three standard deviations above the mean) society is the Triple Nine Society.

Mensa International  10/13/2017

from Wikipedia

Mensa is the largest and oldest high IQ society in the world.[3][4][5] It is a non-profit organization open to people who score at the 98th percentile or higher on a standardized, supervised IQ or other approved intelligence test.[6][7] Mensa formally comprises national groups and the umbrella organization Mensa International, with a registered office in Caythorpe, Lincolnshire, England[8] (which is separate from the British Mensa office in Wolverhampton[9]). The word mensa (/ˈmɛnsə/; Latin: [ˈmensa]) means "table" in Latin, as is symbolized in the organization's logo, and was chosen to demonstrate the round-table nature of the organization; the coming together of equals.[10]

Roland Berrill, an Australian barrister, and Dr. Lancelot Ware, a British scientist and lawyer, founded Mensa at Lincoln College, in Oxford, England, in 1946. They had the idea of forming a society for very intelligent people, the only qualification for membership being a high IQ.[6] It was to be non-political and free from all other social distinctions (racial, religious, etc.).[10]

American Mensa was the second major branch of Mensa. Its success has been linked to the efforts of its early and longstanding organizer, Margot Seitelman.[11]

Berrill and Ware were both disappointed with the resulting society. Berrill had intended Mensa as "an aristocracy of the intellect", and was unhappy that a majority of Mensans came from humble homes,[12] while Ware said, "I do get disappointed that so many members spend so much time solving puzzles".[13]

Membership requirement

Mensa's requirement for membership is a score at or above the 98th percentile on certain standardised IQ or other approved intelligence tests, such as the Stanford–Binet Intelligence Scales. The minimum accepted score on the Stanford–Binet is 132, while for the Cattell it is 148.[14] Most IQ tests are designed to yield a mean score of 100 with a standard deviation of 15; the 98th-percentile score under these conditions is 130.
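Assuming the historical standard deviations of 16 for the Stanford–Binet and 24 for the Cattell scale (an assumption, not stated above), all three minimum scores reduce to the same criterion of two standard deviations above the mean:

    # Cutoff = mean + 2 * SD, under the two-SD reading of the 98th percentile.
    # The SD values for the Stanford-Binet and Cattell are assumed here.
    for test, sd in [("typical IQ test", 15), ("Stanford-Binet", 16), ("Cattell", 24)]:
        print(f"{test}: minimum score = {100 + 2 * sd}")
    # typical IQ test: minimum score = 130
    # Stanford-Binet: minimum score = 132
    # Cattell: minimum score = 148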

Most national groups test using well established IQ test batteries, but American Mensa has developed its own application exam. This exam is proctored by American Mensa and does not provide a score comparable to scores on other tests; it serves only to qualify a person for membership.[citation needed] In some national groups, a person may take a Mensa-offered test only once, although one may later submit an application with results from a different qualifying test.[14]

Mission

Mensa's constitution lists three purposes: "to identify and to foster human intelligence for the benefit of humanity; to encourage research into the nature, characteristics, and uses of intelligence; and to provide a stimulating intellectual and social environment for its members".[15]

To these ends, the organization is also involved with programs for gifted children, literacy, and scholarships, and it also holds numerous gatherings including an annual summit.

Ontology (information science)   10/13/2017

from Wikipedia

In computer science and information science, an ontology is a formal naming and definition of the types, properties, and interrelationships of the entities that really or fundamentally exist for a particular domain of discourse. It is thus a practical application of philosophical ontology, with a taxonomy.

An ontology compartmentalizes the variables needed for some set of computations and establishes the relationships between them.[1][2]

The fields of artificial intelligence, the Semantic Web, systems engineering, software engineering, biomedical informatics, library science, enterprise bookmarking, and information architecture all create ontologies to limit complexity and to organize information. The ontology can then be applied to problem solving.

In the domain of knowledge graph computation, knowledge density is the average number of attributes and binary relations issued from a given entity; it is commonly measured in facts per entity.[3]
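A toy computation of that measure, with a made-up four-fact graph (Python):

    # Toy knowledge graph: 4 facts about 2 entities -> 2.0 facts per entity.
    # Entities and facts are invented for illustration.
    from collections import Counter

    triples = [
        ("Ada_Lovelace", "born", "1815"),
        ("Ada_Lovelace", "field", "mathematics"),
        ("Ada_Lovelace", "collaborator", "Charles_Babbage"),
        ("Charles_Babbage", "born", "1791"),
    ]

    facts_per_entity = Counter(subject for subject, _, _ in triples)
    density = len(triples) / len(facts_per_entity)
    print(f"knowledge density: {density:.1f} facts per entity")  # 2.0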

Overview

What ontologies have in common in both computer science and philosophy is the representation of entities, ideas, and events, along with their properties and relations, according to a system of categories. In both fields, there is considerable work on problems of ontological relativity (e.g., Quine and Kripke in philosophy, Sowa and Guarino in computer science),[5] and debates concerning whether a normative ontology is viable (e.g., debates over foundationalism in philosophy, and over the Cyc project in AI). Differences between the two are largely matters of focus. Computer scientists are more concerned with establishing fixed, controlled vocabularies, while philosophers are more concerned with first principles, such as whether there are such things as fixed essences or whether enduring objects must be ontologically more primary than processes.

Other fields make ontological assumptions that are sometimes explicitly elaborated and explored. For instance, the definition and ontology of economics (also sometimes called the political economy) is hotly debated especially in Marxist economics[6] where it is a primary concern, but also in other subfields.[7] Such concerns intersect with those of information science when a simulation or model is intended to enable decisions in the economic realm; for example, to determine what capital assets are at risk and if so by how much (see risk management). Some claim all social sciences have explicit ontology issues because they do not have hard falsifiability criteria like most models in physical sciences and that indeed the lack of such widely accepted hard falsification criteria is what defines a social or soft science.[citation needed]

Domain ontology

A domain ontology (or domain-specific ontology) represents concepts which belong to a part of the world, and it provides the particular meanings of terms as applied to that domain. For example, the word card has many different meanings. An ontology about the domain of poker would model the "playing card" meaning of the word, while an ontology about the domain of computer hardware would model the "punched card" and "video card" meanings.

Since domain ontologies represent concepts in very specific and often eclectic ways, they are often incompatible. As systems that rely on domain ontologies expand, they often need to merge domain ontologies into a more general representation. This presents a challenge to the ontology designer. Different ontologies in the same domain arise due to different languages, different intended usage of the ontologies, and different perceptions of the domain (based on cultural background, education, ideology, etc.).

At present, merging ontologies that are not developed from a common foundation ontology is a largely manual process and therefore time-consuming and expensive. Domain ontologies that use the same foundation ontology to provide a set of basic elements with which to specify the meanings of the domain ontology elements can be merged automatically. There are studies on generalized techniques for merging ontologies,[14] but this area of research is still largely theoretical.
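The sketch below shows the intuition behind foundation-based merging, with everything invented for the example: two domain ontologies that anchor their concepts to shared foundation identifiers can be merged mechanically by grouping on those anchors.

    # Two toy domain ontologies anchored to a shared foundation ontology.
    # All names here are made up for illustration.
    poker_onto = {"PlayingCard": "foundation:Artifact",
                  "Hand": "foundation:Collection"}
    hardware_onto = {"PunchedCard": "foundation:Artifact",
                     "VideoCard": "foundation:Artifact"}

    # Because both map into the same foundation, merging reduces to
    # grouping concepts by their shared anchor.
    merged = {}
    for onto in (poker_onto, hardware_onto):
        for concept, anchor in onto.items():
            merged.setdefault(anchor, []).append(concept)

    for anchor, concepts in sorted(merged.items()):
        print(anchor, "->", concepts)
    # foundation:Artifact -> ['PlayingCard', 'PunchedCard', 'VideoCard']
    # foundation:Collection -> ['Hand']

Ontologies without such a common anchor require the manual concept-by-concept alignment described above, which is why the automatic case matters.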

Protégé (software)  10/13/2017

from Wikipedia

Protégé is a free, open-source ontology editor and knowledge management system. Protégé provides a graphical user interface to define ontologies. It also includes deductive classifiers to validate that models are consistent and to infer new information based on the analysis of an ontology. Like Eclipse, Protégé is a framework for which various other projects suggest plugins. The application is written in Java and heavily uses Swing to create the user interface. Protégé now has over 300,000 registered users.[4] According to a 2009 book, it is "the leading ontological engineering tool".[5]
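A deductive classifier is easier to picture with a toy version. The sketch below infers implied subclass links by transitive closure and treats a class appearing among its own ancestors as an inconsistency; real reasoners in Protégé implement full OWL semantics, and the class names here are invented.

    # Hand-rolled sketch of classification: infer implied subclass links.
    subclass_of = {
        "Laptop": {"Computer"},
        "Computer": {"Machine"},
        "Machine": set(),
    }

    def classify(hierarchy):
        """Return the transitive closure of the subclass relation."""
        closure = {c: set(parents) for c, parents in hierarchy.items()}
        changed = True
        while changed:
            changed = False
            for c, parents in closure.items():
                inherited = set().union(*(closure[p] for p in parents)) if parents else set()
                if not inherited <= parents:
                    parents |= inherited
                    changed = True
        return closure

    closure = classify(subclass_of)
    print(closure["Laptop"])  # {'Computer', 'Machine'} -- the second link is inferred
    # A class among its own ancestors would signal an inconsistent model.
    assert all(c not in ancestors for c, ancestors in closure.items())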

Protégé is being developed at Stanford University and is made available under the BSD 2-clause license.[6] Earlier versions of the tool were developed in collaboration with the University of Manchester.

Stanford–Binet Intelligence Scales   10/13/2017

from Wikipedia

The Stanford–Binet Intelligence Scales (or more commonly the Stanford-Binet) is an individually administered intelligence test that was revised from the original Binet-Simon Scale by Lewis M. Terman, a psychologist at Stanford University. The Stanford–Binet Intelligence Scale is now in its fifth edition (SB5) and was released in 2003. It is a cognitive ability and intelligence test that is used to diagnose developmental or intellectual deficiencies in young children. The test measures five weighted factors and consists of both verbal and nonverbal subtests. The five factors being tested are knowledge, quantitative reasoning, visual-spatial processing, working memory, and fluid reasoning.

The development of the Stanford–Binet initiated the modern field of intelligence testing and was one of the first examples of an adaptive test. The test originated in France, then was revised in the United States. It was initially created by the French psychologist Alfred Binet, who, following the introduction of a law mandating universal education by the French government, began developing a method of identifying "slow" children for their placement in special education programs (rather than removing them to asylums as "sick").[1] As Binet indicated, case studies might be more detailed and helpful, but the time required to test many people would be excessive. In 1916, at Stanford University, the psychologist Lewis Terman released a revised examination which became known as the "Stanford–Binet test".

Historical use

One hindrance to widespread understanding of the test is its use of a variety of different measures. In an effort to simplify the information gained from the Binet-Simon test into a more comprehensible and easier-to-understand form, German psychologist William Stern created the now well-known intelligence quotient (IQ). By comparing the age at which a child scored to their chronological age, a ratio is created that expresses the rate of their mental progress as an IQ. Terman quickly adopted the idea for his Stanford revision, with the adjustment of multiplying the ratio by 100 to make it easier to read.
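The arithmetic is simple; the ages below are hypothetical examples:

    # Stern's ratio IQ as Terman adopted it: mental age divided by
    # chronological age, times 100.
    def ratio_iq(mental_age_months: float, chronological_age_months: float) -> float:
        return 100 * mental_age_months / chronological_age_months

    # A 10-year-old who performs like a typical 12-year-old:
    print(ratio_iq(12 * 12, 10 * 12))  # 120.0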

As also discussed by Leslie in 2000, Terman was another of the main forces in spreading intelligence testing in the United States (Becker, 2003). Terman quickly promoted the use of the Stanford–Binet in schools across the United States, where it saw a high rate of acceptance. Terman's work also drew the attention of the U.S. government, which recruited him to apply the ideas from his Stanford–Binet test to military recruitment near the start of World War I. With over 1.7 million military recruits taking a version of the test, and with the acceptance of the test by the government, the Stanford–Binet saw an increase in awareness and acceptance (Fancher & Rutherford, 2012).

Given the perceived importance of intelligence and with new ways to measure intelligence, many influential individuals, including Terman, began promoting controversial ideas to increase the nation's overall intelligence. These ideas included things such as discouraging individuals with low IQ from having children and granting important positions based on high IQ scores. While there was significant opposition, many institutions proceeded to adjust students' education based on their IQ scores, often with a heavy influence on future career possibilities (Leslie, 2000).

Stanford–Binet Intelligence Scale: Fifth Edition

Just as when Binet first developed the IQ test, the Stanford–Binet Intelligence Scale: Fifth Edition (SB5) is rooted in the schooling process as a way to assess intelligence. It continuously and efficiently assesses all levels of ability across a broader age range. It is also capable of measuring multiple dimensions of ability (Ruf, 2003).

The SB5 can be administered to individuals as young as two years of age. This revision includes ten subtests spanning both verbal and nonverbal domains. Five factors are also incorporated in the scale, directly related to the Cattell–Horn–Carroll (CHC) hierarchical model of cognitive abilities: fluid reasoning, knowledge, quantitative reasoning, visual-spatial processing, and working memory (Bain & Allin, 2005). Many of the familiar item types, such as picture absurdities, vocabulary, memory for sentences, and verbal absurdities, remain from previous editions (Janzen, Obrzut, & Marusiak, 2003), though with more modern artwork and item content in the revised fifth edition.

Every verbal subtest has a nonverbal counterpart across all factors. These nonverbal tasks consist of movement responses such as pointing or assembling manipulatives (Bain & Allin, 2005). The counterparts were included to address the need for language-reduced assessment in multicultural societies. Depending on age and ability, administration can range from fifteen minutes to an hour and fifteen minutes.

The fifth edition incorporated a new scoring system, which can provide a wide range of information such as four intelligence score composites, five factor indices, and ten subtest scores. Additional scoring information includes percentile ranks, age equivalents, and a change-sensitive score (Janzen, Obrzut, & Marusiak, 2003). Extended IQ scores and gifted composite scores are available with the SB5 in order to optimize the assessment for gifted programs (Ruf, 2003). To reduce errors and increase diagnostic precision, scores are now obtained electronically by computer.

The standardization sample for the SB5 included 4,800 participants varying in age, sex, race/ethnicity, geographic region, and socioeconomic level (Bain & Allin, 2005).


This page was last updated October 13th, 2017 by kim
