
Scaling: Multidimensional

J. de Leeuw, in International Encyclopedia of the Social & Behavioral Sciences, 2001

1.3 Genetics/Systematics

A very early application of a scaling technique is due to Fisher (1922), who used crossing-over frequencies from a number of loci to construct a (one-dimensional) map of part of the chromosome. Another early application of MDS ideas is Boyden (1931), in which reactions to sera are used to derive similarities between common mammals, and these similarities are then mapped into three-dimensional space.

In much of systematic zoology, distances between species or individuals are actually computed from a matrix of measurements on a number of variables describing the individuals. Many measures of similarity or distance have been used, not all of which have the usual metric properties. The derived dissimilarity or similarity matrix is then analyzed by MDS or by cluster analysis; the latter is common because systematic zoologists show a clear preference for tree representations over continuous representations in R^p.
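As a concrete illustration of this pipeline (our sketch, not an example from the chapter itself), the code below embeds a small precomputed dissimilarity matrix in two dimensions with scikit-learn's MDS; the four taxa and their distances are invented for the example.

    import numpy as np
    from sklearn.manifold import MDS

    # Hypothetical symmetric dissimilarity matrix for four taxa
    # (values invented for illustration).
    D = np.array([
        [0.0, 0.3, 0.7, 0.9],
        [0.3, 0.0, 0.6, 0.8],
        [0.7, 0.6, 0.0, 0.4],
        [0.9, 0.8, 0.4, 0.0],
    ])

    # Metric MDS on the precomputed dissimilarities; for a tree
    # representation one would instead feed D to a clustering routine.
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    coords = mds.fit_transform(D)  # one 2-D point per taxon
    print(coords.round(2))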


URL: https://www.sciencedirect.com/science/article/pii/B0080430767005015

Workplace Deviance

Rebecca Bennett, Shelly Marasi, in International Encyclopedia of the Social & Behavioral Sciences (Second Edition), 2015

Typology

The typology of workplace deviance developed by Robinson and Bennett (1995) used multidimensional scaling techniques to identify two dimensions underlying the vast array of deviant workplace behaviors: (1) whether the potential harm was serious or minor and (2) the entity that was the target of the deviant act (employees or other individuals associated with the organization, such as customers and suppliers, referred to as interpersonal deviance, or the organization itself, referred to as organizational deviance). Crossing the two orthogonal dimensions yielded four different forms of workplace deviance. Political deviance involves minor deviant acts directed toward individuals at work, such as gossiping about coworkers, whereas personal aggression entails serious deviant acts targeted toward individuals at work, such as sexual harassment. Production deviance refers to minor deviant acts aimed at the organization, such as leaving work early or arriving to work late, whereas property deviance involves serious deviant acts targeted toward the organization, such as sabotaging equipment (Robinson and Bennett, 1995). Refer to Figure 1 for a complete overview of the typology, including examples of the four types of workplace deviance.


Figure 1. Typology of workplace deviance.

Taken from Robinson, S.L., Bennett, R.J., 1995. The typology of deviant workplace behaviors: a multidimensional scaling study. Academy of Management Journal 38, 555–572.


URL: https://www.sciencedirect.com/science/article/pii/B9780080970868220060

Perceptual Learning

Shimon Edelman, Nathan Intrator, in Psychology of Learning and Motivation, 1997

B. Implications

The exploration of the metric and dimensional structure of psychological spaces was boosted by improvements in metric scaling techniques and by the development of nonmetric multidimensional scaling in the early 1960s (Shepard, 1966; Kruskal, 1964). By 1980, a general pattern was emerging from a large variety of perceptual scaling experiments: the subject's performance in tasks involving similarity judgment or perception can be accounted for to a substantial degree by postulating that perceived similarity directly reflects the metric structure of an underlying perceptual space, in which the various stimuli are represented as points (Shepard, 1980).

This pattern has not escaped the attention of theoretical psychologists. In a paper that appeared on the tricentennial of the publication of Newton's Philosophiae Naturalis Principia Mathematica, and that was motivated by a quest for psychological laws to match those of mechanics, Shepard (1987) proposed a law of generalization tying the likelihood that two stimuli evoke the same response to the proximity of the stimuli in a psychological representation space, the same space that so persistently turned out to be low-dimensional in the experiments surveyed by Shepard (1980).

The significance of Shepard's insight is twofold. First, the introduction of the notion of a psychological space puts novel stimuli on an equal footing with familiar ones: A point corresponding to a novel stimulus is always located somewhere in the representation space; all one has to do is characterize its location with respect to the familiar points. The great importance of generalization stems from the fact that the visual system literally never encounters the same stimulus twice: there are always variations in the viewing conditions such as illumination; objects look different from different viewpoints; articulated and flexible objects change their shape. Mere memory for past stimuli, faithful and extensive as it may be, is, therefore, a poor guide for behavior. In contrast, a suitable representation space can help the system concentrate on the relevant features of the stimulus, which, presumably, remain invariant. In such a space, proximity is a reliable guide for generalization. Shepard's (1987) work shows that the validity of proximity as the basis for generalization is universal, and can be derived from first principles.
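Shepard's law also has a compact quantitative form: the probability of generalization falls off approximately exponentially with distance in psychological space. A minimal numeric sketch (the decay constant and distances below are invented for illustration):

    import numpy as np

    def generalization(d, k=1.0):
        # Shepard-style gradient: generalization decays exponentially
        # with psychological distance d; k is a free decay constant.
        return np.exp(-k * d)

    distances = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
    print(generalization(distances).round(3))  # [1. 0.607 0.368 0.135 0.018]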

The second reason behind the importance of having a common space for the representation of a range of perceptual qualities in any given task has to do with the low dimensionality of such a space. This point has become clear only recently, with the emergence of formal approaches to quantifying the complexity of learning problems. Whereas in some perceptual tasks (such as color vision) low dimensionality of the representation stems naturally from the corresponding low dimensionality of the stimulus space, in other tasks (notably, in object shape recognition) the situation is less clear, although there are some indications that a useful common low-dimensional parameterization of diverse shapes can be achieved (see Fig. 1).


Fig. 1. Top. Images of several three-dimensional objects. Middle. Images of the same objects, parameterized with 15625 parameters and re-rendered. The parameterization was defined by computing the occupancy indices for each voxel in a 25 × 25 × 25 subdivision of the volume of each object. Bottom. Images rendered from the representations of the objects in a common five-dimensional parameter space, obtained from the high-dimensional voxel-based space using principal component analysis. If the parameterization is carried out in this manner, it will depend on the choice of objects, because the latter determines the set of basis functions that span the object space. If universal basis functions for shape, such as deformation modes, are used, the parameterization will be universal too (although its dimensionality is likely to be somewhat higher). In any case, the possibility of such parameterization indicates that a low-dimensional distal shape space may provide a basis for shape representation that is just as powerful as the low-dimensional spaces of naturally occurring illumination and reflectance spectra, discussed in section I.A.

(Data courtesy of S. Duvdevani-Bar.)
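The parameterization described in the caption is easy to sketch. In the toy version below, random occupancy grids stand in for real voxelized objects, while the 25 × 25 × 25 grid and the five retained components follow the figure.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)

    # Stand-ins for voxelized objects: each object is a 25x25x25
    # occupancy grid flattened into a 15625-dimensional vector
    # (random here; real use would rasterize actual 3-D shapes).
    n_objects = 20
    voxels = (rng.random((n_objects, 25 * 25 * 25)) > 0.5).astype(float)

    # Project the high-dimensional voxel space onto a common
    # five-dimensional parameter space, as in the bottom row of Fig. 1.
    pca = PCA(n_components=5)
    params = pca.fit_transform(voxels)  # one 5-D code per object
    print(params.shape)  # (20, 5)

As the caption notes, a parameterization obtained this way depends on the particular set of objects, since those objects determine the basis that spans the object space.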

In the case of object recognition, it is tempting to argue that one should use the multidimensional signal as is, because the shape information that the visual system needs is certainly present there: “The photoreceptors are […] necessarily capable of coding, by their population response, any conceivable stimulus. Why are subsequent populations needed?” (Desimone & Ungerleider, 1989, p. 268). We now know that this approach to representation is untenable, as far as learning to recognize objects from examples is concerned. The reason is related to the notion of the curse of dimensionality: the number of examples necessary for reliable generalization grows exponentially with the number of dimensions (Bellman, 1961; Stone, 1982). Learnability thus necessitates dimensionality reduction.
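The exponential growth is easy to make concrete with a standard back-of-the-envelope argument (ours, not the chapter's): covering the unit cube [0, 1]^d at a resolution of 0.1 per axis requires 10^d cells, so a learner that needs at least one example per cell quickly becomes hopeless.

    # Cells needed to cover [0, 1]^d at resolution 0.1 per axis:
    # a proxy for the examples a purely local learner would need.
    for d in (1, 2, 3, 10, 100):
        print(d, 10 ** d)
    # d = 10 already demands ten billion cells, which is why
    # learnability forces dimensionality reduction.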


URL: https://www.sciencedirect.com/science/article/pii/S0079742108602881

Time Use Research: Recent Developments

John P. Robinson, Teresa A. Harms, in International Encyclopedia of the Social & Behavioral Sciences (Second Edition), 2015

MDS Applied to Szalai's (1965) Multinational Data

The multinational data from the pioneering time-diary study of Szalai (1972) became available soon after the SSA/MDS technique was developed. Converse (1972) applied MDS to a matrix of differences in the daily times spent by people in each country on a range of activities, as shown in Table 1. His resulting two-dimensional MDS visualizations provided immediate and plausible insights into how similar life was in the different national settings involved in the study. Converse (1972) succinctly described the resulting MDS diagram as a picture that “bears a substantial resemblance to a map of the western world… It is remarkable that statistical compression of these raw data yields anything resembling a physical map” (p. 150).
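Converse's analysis can be mimicked in miniature: start from per-country daily minutes on a few activity categories, compute pairwise differences between the country profiles, and embed the resulting matrix in two dimensions. The countries and minutes below are invented placeholders, not Szalai's data.

    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from sklearn.manifold import MDS

    countries = ["A", "B", "C", "D"]
    # Invented daily minutes on (paid work, housework, leisure, sleep).
    minutes = np.array([
        [460.0, 120.0, 180.0, 470.0],
        [430.0, 150.0, 200.0, 460.0],
        [480.0, 100.0, 160.0, 480.0],
        [420.0, 160.0, 220.0, 450.0],
    ])

    # Matrix of between-country differences in time allocation.
    D = squareform(pdist(minutes))

    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(D)
    for name, (x, y) in zip(countries, coords):
        print(name, round(x, 1), round(y, 1))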


URL: https://www.sciencedirect.com/science/article/pii/B9780080970868321961

Issue Constraint in Political Science

S. Ansolabehere, in International Encyclopedia of the Social & Behavioral Sciences, 2001

3 Aggregate Constraint

Ideology, or more generally the notion of issue constraint, is useful to the extent that it allows people to sort out the political beliefs of those involved in politics in some simple way, say, from liberal to conservative. Shorthand assessments of politics are extremely common. The Americans for Democratic Action, the American Conservative Union, and many other interest groups publish annual ratings of legislators, which score politicians from 0 to 100 according to their consistency with the group's ideology. Survey research organizations and interest groups publish similar classifications of voter ‘types’ based on clusters of answers to questions, ranging from abortion to zero-based budgeting. In fact, several public interest groups wishing to improve voter decisions offer simple on-line questionnaires that allow people to evaluate their own ‘ideology’ and compare it with the candidates' (e.g., GoVote.Com (www.govote.com)).

A central question for political scientists who analyze ideology is whether politics boils down to one dimension or to many. The greatest degree of aggregate issue constraint comes from positing a single dimension; that is, everyone sorts out along a continuum from one extreme (say, liberal) to another (say, conservative). The least degree of aggregate constraint obtains when every issue differs, so that the number of dimensions over which people take differing political stances equals the number of issues on the public agenda. In social choice theory, larger numbers of dimensions create analytical difficulties and lead social scientists to conclude that institutions are needed to constrain the political agenda and, with it, otherwise open-ended political discourse.

What does the dimensionality of politics look like? The most intensive study of the subject has been of roll-call votes in the US Congress. The US Congress typically takes over 1,000 roll-call votes each year covering hundreds of different policy concerns and issues. Using statistical scaling techniques, political scientists analyze the correlations across all roll-call votes to determine how many different factors describe the behavior of legislators. (Two different techniques have been developed expressly for this purpose and are widely used by congressional scholars. Poole and Rosenthal (1997) use a nonlinear maximum likelihood technique, which assumes probabilistic voting and normal probability distributions for each vote. Heckman and Snyder (1997) assume probabilistic voting in a spatial model with uniform probabilities, which implies a linear factor analysis model with boundary constraints.) The number is very small, possibly just one. One factor dominates congressional roll-call voting, and it is most strongly distinguished by legislators' votes on important economic issues, such as taxation, Social Security, education, welfare, and the minimum wage.
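The logic of such scaling can be illustrated in a deliberately simplified linear form (closer in spirit to Heckman and Snyder's factor model than to Poole and Rosenthal's NOMINATE, and with synthetic votes rather than real roll calls) by taking the leading factor of a centered legislator-by-vote matrix:

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic roll-call matrix: 50 legislators x 200 votes,
    # coded +1 = yea, -1 = nay.  Votes are driven by one latent
    # ideology dimension plus noise.
    ideology = rng.normal(size=50)       # latent ideal points
    cutpoints = rng.normal(size=200)     # one cutpoint per vote
    votes = np.sign(ideology[:, None] - cutpoints[None, :]
                    + 0.5 * rng.normal(size=(50, 200)))

    # Leading factor of the centered vote matrix.
    centered = votes - votes.mean(axis=0)
    U, S, _ = np.linalg.svd(centered, full_matrices=False)
    recovered = U[:, 0] * S[0]

    # The first factor should track the true ideal points closely.
    print(round(abs(np.corrcoef(ideology, recovered)[0, 1]), 2))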

This factor explains seven times more of the variation in roll-call votes than any other factor. Other, much less important dimensions tap conflicts over civil liberties and international relations. In the 1950s, 1960s, and 1970s, race served as a much stronger second dimension of conflict in American politics; it has since been subsumed by the first dimension. (For details on the dimensionality of Congress, see Poole and Rosenthal (1997) and Heckman and Snyder (1997).)

Studies of aggregate constraint among the public have been limited by the nature of the survey enterprise. The costs of interviewing large samples within legislative districts or asking detailed questions about dozens of policies are prohibitive. However, within the present limitations, only a handful of dimensions typically appear, with the main dimension tapping attitudes about the economic authority or size of government.

A fundamental conjecture of representative politics is that elected officials are responsive to the citizenry. This general idea has been applied to questions of issue constraint. Do elected officials show the same sort of issue constraint in their beliefs that the public exhibits? This question has proved extremely hard to study, owing to the enormous cost of conducting extensive elite and mass surveys. Many studies of this question suggest that the degree of constraint differs between the elite and mass levels and that the degree of agreement between elites and masses varies across issues. In his classic article ‘The nature of belief systems in mass publics,’ Philip Converse (1964) documented significantly lower correlations across issues among the masses than among elites, and suggested that people with higher education levels exhibited higher degrees of constraint. It is unclear, then, whether there is much association between the belief systems of mass publics and the belief systems of elites.

Evidence of responsiveness, or aggregate constraint, comes again from the US Congress. A number of important studies have attempted to measure the responsiveness of elected officials to voters by associating roll-call voting behavior with mass attitudes. Political scientists have found a strong association between district ideology, proxied by presidential vote, and roll-call voting behavior (e.g., Ansolabehere et al. 1999). Perhaps the most striking evidence of linkages between mass beliefs and elite beliefs and behavior comes from taking a broader historical perspective. Over time, the policies embraced by the courts, Congress, the President, and the electorate move strongly together. Across a wide range of concerns on the public agenda, as the populace moved to the left from the 1950s to the 1960s, so too did American institutions and policy, and as the populace moved to the right in the 1980s, so too did institutions and policy (Mayhew 1991). Without settling who moved first, masses or elites, these broad historical patterns provide strong evidence of important issue constraint in American politics.

Despite empirical concerns about the degree of issue constraint in public thinking, ideology remains a central organizing principle for political science. The reason is parsimony. For observers of politics, ideology is a useful shorthand; for voters, who have to sort out complicated choices, it may be essential.


URL: https://www.sciencedirect.com/science/article/pii/B0080430767011608

Language Abnormalities in Psychosis: Evidence for the Interaction between Cognitive and Linguistic Mechanisms

Joseph I. Tracy, in Handbook of Neurolinguistics, 1998

35-1 MAJOR RESEARCH PERSPECTIVES

35-1.1 The Speech—”Thought”—Disorder of Schizophrenia and Mania

Thought disorder in psychiatric disorders was originally described by Bleuler (1911). On the “Bleulerian” view, speech anomalies in schizophrenia patients were spawned by abnormal conceptual structures or associative processes directly related to the psychotic state (e.g., bizarre mental content, autistic thought, tangentiality, poverty of speech, non sequiturs, derailment, neologisms, and “word salads”). The thought disorder of mania differed in that it involved more prominent flight of ideas and circumstantial speech. On this view, the mechanics of phonology, syntax, and semantics, when examined in isolation, were intact in the psychoses and placed no constraints on speech or comprehension.

Psychiatry, in an attempt to operationalize the “Bleulerian” position, and despite evidence that thought and language are not isomorphic (see Rieber & Vetter, 1994), developed the tradition of measuring thought disorder through verbal behavior, that is, speech (e.g., the Scale for the Assessment of Thought, Language, and Communication Disorders, TLC; Andreasen, 1982). This makes it impossible to segregate empirically true linguistic failures from disturbances in thought content or structure, or in cognitive processes. Research in the “Bleulerian” tradition has sought to determine whether “thought disorder” symptoms reflect disturbances in the underlying structure (actual semantic linkages) or function (speed or spread of activation) of lexical networks. Most studies have failed to demonstrate the former; that is, schizophrenics are not more prone to unusual responses on word association tasks (Laffal, 1965; see Cohen, 1978). Recently, however, Aloia, Gourovitch, Weinberger, and Goldberg (1996) reported results implying abnormality in the structure of semantic networks of chronic schizophrenia patients. Applying multidimensional scaling techniques to category fluency data, they found that, compared with normals, patients were less likely to group exemplars into subordinate clusters and produced category exemplars that “did not follow any logical ordering in two-dimensional space” (p. 270).
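One simple way to turn fluency data into MDS input (a generic sketch of the approach, not necessarily Aloia et al.'s exact procedure) is to treat the separation between two exemplars in a participant's recall order as a dissimilarity and average it over participants:

    import numpy as np
    from sklearn.manifold import MDS

    animals = ["cat", "dog", "lion", "tiger", "cow", "horse"]
    idx = {a: i for i, a in enumerate(animals)}

    # Invented fluency sequences from three participants.
    sequences = [
        ["cat", "dog", "cow", "horse", "lion", "tiger"],
        ["lion", "tiger", "cat", "dog", "horse", "cow"],
        ["dog", "cat", "horse", "cow", "tiger", "lion"],
    ]

    # Dissimilarity = mean absolute separation in recall order.
    n = len(animals)
    D = np.zeros((n, n))
    for seq in sequences:
        pos = {w: p for p, w in enumerate(seq)}
        for a in animals:
            for b in animals:
                D[idx[a], idx[b]] += abs(pos[a] - pos[b])
    D /= len(sequences)

    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(D)
    print(coords.round(2))  # exemplars recalled together land close together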

Maher’s work (1972) reflected the “Bleulerian” position and investigated the function of semantic networks. For instance, Manschreck, Maher, Celada, Schneyer, and Fernandez (1991) reported a higher ratio of objects to subjects in thought-disordered individuals (object chaining, i.e., listing objects at the end of a sentence) and attributed this to an attentional deficit that promotes associative intrusions. Research using lexical priming has built considerable evidence suggesting that schizophrenic thought intrusions and derailments in discourse arise from the hyperactive functioning of semantic networks. For instance, Kwapil, Hegley, Chapman, and Chapman (1990) used a degraded-word recognition task and found increased semantic priming in schizophrenia patients compared with bipolar patients and normal controls. This effect has been delineated by Spitzer et al. (1994), who utilized both semantic and phonologic priming tasks. Spitzer et al. found more pronounced semantic priming in thought-disordered schizophrenia patients, and suggested that information spreads more quickly and farther through the semantic networks of these individuals. They also observed phonologic inhibition in normals (to permit unimpeded articulation), but not in thought-disordered patients. Spitzer et al. concluded that thought disorder was associated with more highly activated semantic and more disinhibited phonemic networks. Spitzer’s work is an advance, specifying well the associational disturbance discussed by Bleuler and Maher, but it leaves unclear the exact type of psychotic speech such abnormal activation causes and how this abnormality interacts with normal language operations.

35-1.2 Schizophrenic or Manic Speech as Aphasia

Kraepelin (1919/1971) was so struck by the similarity with aphasia that he used the term “schizophasia” to describe schizophrenic speech. Indeed, the at least superficial similarity of schizophrenic speech to fluent aphasic speech prompted many comparisons, in the hope that the two could be understood by the same etiologic mechanism. The hypothesis of this perspective is that the speech disorder found in some psychotic patients constitutes a deficit in primary language mechanisms. Prominently representing this position, Chaika (1974, 1982) utilized taped interviews of a schizophrenia patient to identify characteristics that “suggest a disruption in the ability to apply those rules which organize linguistic elements, such as phonemes, words, and sentences, into corresponding meaningful structures, namely words, sentences, and discourse” (1974, p. 275). She suggested that schizophrenic speech represented an “intermittent aphasia,” distinct from disordered thought, that correlated with acute psychotic states and was more prominent in nonmedicated patients (for criticism see Fromkin, 1975).

Several empirical studies have distinguished schizophrenia from aphasia (Gerson, Benson, & Frazier, 1977). Faber et al. (1983) found that thought-disordered schizophrenia patients had better auditory comprehension and use of complex phrases and polysyllabic words than aphasics, but more private word usage. Faber et al. (1983) argued that no classic aphasia syndrome existed for schizophrenia despite its sharing features with fluent aphasias (fluent/spontaneous speech, idiosyncratic word usage, paraphasic responses, empty speech, and so forth). Landre, Taylor, and Kearns (1992) found that speech-disordered schizophrenia patients and fluent aphasics were identical in language comprehension, single-word naming, repetition, and spontaneous speech (e.g., semantic paraphasias, unclear reference). Landre et al. also found that an index of general intelligence was strongly associated with language performance in the schizophrenia patients, leading the authors to attribute the problems to a general cognitive deficit.

Methodological issues plague comparative work with aphasia. Seldom are lesion locations matched within the aphasic sample. Regarding psychosis, standardized diagnostic procedures are not always used, and patients are often medicated and chronically institutionalized, making unclear the attribution of deficits to a primary speech/language disorder. There are many reasons to differentiate schizophrenia from aphasia. Psychodynamically oriented writers such as Sass (1992) have noted that the intermittent nature of schizophrenic speech indicates a dependence on social contexts such as stress, and perhaps a motivation to avoid focus on threatening material. Others have noted that the speech of psychotic patients does not show a recovery curve as aphasics' speech does and, unlike aphasia, appears resistant to speech therapy. Finally, no clear dissociations of language functions have been shown in psychotic speech, as might be expected if it were an aphasia (e.g., intact fluency with impaired comprehension). Many workers would accept Faber et al.’s (1983) conclusion that similarities with aphasia exist, yet few would argue that psychosis brings with it an aphasic disorder. Interestingly, even those who liken schizophrenia to aphasia, such as Chaika, consider the etiologic mechanisms of the two to be different.


URL: https://www.sciencedirect.com/science/article/pii/B9780126660555500393

Concepts and Categorization

Douglas L. Medin, John D. Coley, in Perception and Cognition at Century's End, 1998

2 Feature Independence

The debate between prototype and exemplar theories has been about how featural information is integrated across examples and not about the nature of the features themselves. Indeed, predictions of category models depend crucially on being able to specify what the features are and having the research participants’ analysis of stimuli agree with that of the experimenter. Researchers take care to ensure that agreement is realized either by using simple stimuli with salient dimensions (e.g., geometric shapes differing in size and color) or by employing multidimensional scaling techniques to identify (or confirm) dimensions (e.g., Nosofsky, 1987). Although this general strategy is absolutely necessary for proper theoretical contrasts, it does carry with it the implicit assumption that the specific realization of abstract category structure is not important. To be sure, researchers are vigilant about establishing the generality of their results by employing a variety of stimuli. Nevertheless, this variety may tend to be biased toward realizations where the features of examples are independent and unrelated to each other (e.g., a triangle can be any color).

Are the properties or features of concepts generally independent? Probably not. First of all, the components of concepts are unlikely to have the status of being primitive features. Consider, for example, the features people typically list for the category bird: living, laying eggs, flying, having wings and feathers, singing, building nests, and so on. Each of these “features” is itself a complex concept with both an internal structure and an external structure based on interproperty relationships. For example, laying eggs implies a living organism and building nests is, in part, in service of protecting eggs. Flying requires wings and affords building nests in trees. In short, rather than independent features one has a web of relationships. This fact not only raises questions about how to define similarity over entities with interrelated features but also calls attention to issues of how the phenomena that we have been discussing might change as a function of nonindependent feature structure.

And change they do. Consider again linear separability, which, as we mentioned, can be characterized in terms of a summing of evidence against a criterion. In a series of experiments, Wattenmaker, Dewey, Murphy, and Medin (1986) found that linearly separable categories were easier to learn than nonlinearly separable categories if the stimuli or instructions facilitated interproperty coding that was compatible with a summing of evidence (features). An example from one of their studies involved a category whose modal properties were “made of metal,” “medium-sized,” “has a regular surface,” and “easy to grasp.” The contrasting category had differing modal values on each of these dimensions. Out of context, one can think of many interproperty relationships, and no one of them appears particularly salient. In category-learning conditions run without further instructional elaboration, no advantage for linearly separable categories was observed. In a second condition, however, participants were given the further hint that the objects in one category might serve as a substitute for a hammer. This hint improved performance overall to some extent, but most striking was the observation that the linearly separable category structure was now much easier than the nonlinearly separable structure. This result might appear to support prototype theory, but recall that prototype theory requires featural independence (and it cannot explain when linear separability matters and when it does not).
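Linear separability in this sense means that some weighted sum of features exceeds a criterion for every member of one category and for no member of the other. A quick check on synthetic binary feature structures (our illustration, not Wattenmaker et al.'s stimuli): if a perceptron reaches perfect training accuracy, an additive summing-of-evidence rule exists.

    import numpy as np
    from sklearn.linear_model import Perceptron

    # Classic family-resemblance structure: each category-A exemplar
    # has three of four modal features (1s); category B is its mirror.
    A = np.array([[1, 1, 1, 0],
                  [1, 1, 0, 1],
                  [1, 0, 1, 1],
                  [0, 1, 1, 1]])
    B = 1 - A

    X = np.vstack([A, B])
    y = np.array([0] * 4 + [1] * 4)

    clf = Perceptron(max_iter=1000, tol=None).fit(X, y)
    print(clf.score(X, y))  # 1.0 -> a summing rule separates the categories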

This selective facilitation of linearly separable categories reflects something beyond making the stimulus materials more meaningful. In other studies, Wattenmaker et al. showed that instructional hints could selectively improve performance on nonlinearly separable categories; the key factor was the relationship between the type of interproperty encoding induced and the category structure. A clear implication of these findings is that one cannot make generalizations in terms of abstract category structures and expect them to carry through across contexts, because different types of interproperty or relational coding may take place (see Medin et al., 1987, and Murphy & Spalding, 1996, for corresponding observations from sorting tasks; family-resemblance sorting is readily observed under appropriate relational coding conditions). Equally significant is the point that interproperty relationships are outside the boundary conditions of almost all current categorization models (including prototype, exemplar, and rule-based models). Therefore, these models currently have limited generality, and this limitation is most evident where one might most want to generalize: meaningful stimuli.

In summary, although exemplar models show a lot of promise (see Smith & Zarate, 1992, for a recent informative extension of exemplar models to social categories), this section has revealed two important problems and limitations. One is that people employ rules and strategies, even for artificial, relatively meaningless stimuli. The other limitation (which also applies to all the other models under discussion) is that exemplar models have been developed only for contexts in which the features of category members are independent. That is, interproperty relationships are not addressed. The latter problem is quite serious, as we would like to move freely between natural and artificial stimuli in our analyses of concepts. To do this, we need to address meaningful stimuli and the attendant issues of relational coding. In the next section this issue will be developed further, both with respect to ideas about similarity and with respect to the role of knowledge structures in category organization.


URL: https://www.sciencedirect.com/science/article/pii/B9780123011602500150

Semantic Differential

Andrea Ploder, Anja Eder, in International Encyclopedia of the Social & Behavioral Sciences (Second Edition), 2015

Development and Early Career (1940s and 1950s)

The origins of the SD trace back to research on synesthesia – a neural linkage of two or more sensory or cognitive pathways. This linkage can result in the perceptual entanglement of very different entities – such as ‘the number two,’ ‘green,’ and ‘warm.’ With the help of a primitive SD (two poles without a scale), Theodore F. Karwoski (an early teacher and intellectual guide of Osgood; Osgood/Tzeng, 1990: 3), Henry S. Odbert, and others found that some ‘quasisynesthetic’ phenomena are very widespread. Their findings indicate that many semantic relations are shared by a very high percentage of people, independently of their cultural background (e.g., large–loud, near–fast, bright–happy, bad–dark, etc.). In the early 1940s, Osgood worked on synesthetic thinking together with Karwoski (Karwoski et al., 1942) and started to develop a scaling technique together with his friend and colleague Ross Stagner (Osgood and Stagner, 1941; Stagner and Osgood, 1941). In 1946, Stagner and Osgood used this technique for a study on social stereotypes. Previous studies had suggested a continuum between the poles, so they introduced a seven-step scale with a neutral zero point. To find out more about the forces behind these associations, they conducted a large test with 100 college students and 1000 items. They applied factor analysis and found three central factors with maximal differentiating power: evaluation, strength (later: potency), and activity (Figure 4). On the dimension of evaluation, adjective pairs like ‘good–bad’ and ‘beautiful–ugly’ showed high factor loadings. Pairs like ‘large–small’ and ‘strong–weak’ clustered in a separate dimension, which Stagner and Osgood called strength. Pairs like ‘fast–slow’ and ‘active–passive’ were assigned to the factor of activity. Together, the three factors made up the so-called EPA structure. In the following years, Osgood and his colleagues found similar factor structures in many other projects and began to assume that EPA represents the ‘general dimensions of meaning.’
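The factor-analytic step can be sketched as follows. The ratings here are random placeholders (real SD judgments would be needed to recover the EPA factors), but the shape of the computation matches the description above: respondents rate concepts on seven-step bipolar scales, and factors are extracted from the scale intercorrelations.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)

    scales = ["good-bad", "beautiful-ugly", "large-small",
              "strong-weak", "fast-slow", "active-passive"]

    # Placeholder ratings: 100 respondents x 6 bipolar scales,
    # seven steps coded -3 .. +3 as in the SD.
    ratings = rng.integers(-3, 4, size=(100, len(scales))).astype(float)

    # Extract three factors, mirroring evaluation/potency/activity.
    fa = FactorAnalysis(n_components=3, random_state=0).fit(ratings)
    for scale, loadings in zip(scales, fa.components_.T):
        print(scale, loadings.round(2))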

In 1952, Osgood introduced the SD as a tool for the objective measurement of meaning. He was convinced that meaning is an important issue for behaviorist psychology but that it could not be captured with the methodological tools of the time. He discussed several theories of meaning and ended up with the following definition: “Words represent things because they produce some replica of the actual behavior toward these things, as a mediation process. This is the crucial identification, the mechanism that ties particular signs to particular stimulus-objects and not to others” (204). This conception is clearly neobehaviorist. It admits that cognitive processes affect human behavior but otherwise stays within the framework of behaviorist thinking. According to this definition, the central semantic relation ties a sign to an object, and the object stimulates a certain behavior. If the object is not at hand, the sign produces a cognitive replication – not of the object, but of the behavior toward the object – which we call meaning. In other words, the meaning of a sign is a mediator between the sign and a particular stimulus–response. For Osgood's purpose, it is crucial that the cognitive replication of a sign is not exhausted by its denotative meaning but always carries additional associations – its connotative meaning. In his widely received textbook Methods and Theory in Experimental Psychology (1953), he explained why this kind of cognitive response is most likely to be the carrier of meaning: it is easy to perform, requires little energy, and therefore needs only little reinforcement to be learned. In the SD, Osgood exploited this idea methodologically, assuming that every mediational meaning response operates within a universal semantic space, which his scales claim to represent.

After a discussion of several methodological possibilities for the measurement of meaning, Osgood advocated a scaling approach. Scaling, he argued, provides for “controlled association in which the nature of the association is specified by definition of the scales (…) but the direction and intensity of association is unspecified” (222). Finally, he discussed the origins, logic, and merits of the SD and called for further research on its objectivity, reliability, validity, sensitivity, comparability, and utility (230 ff.). As will be shown, many papers answered that call. Especially between 1964 and 1974, the number of publications on the merits and limits of the SD was enormous (see Figure 5).


Figure 5. Number of SSCI articles on the SD, 1950–2013.

Data source: Social Science Citation Index (‘Semantic Differential’ in title (black); ‘Semantic Differential’ as a topic (gray)).

In a 1954 case report for the Journal of Abnormal and Social Psychology (A Blind Analysis of a Case of Multiple Personality Using the Semantic Differential), Osgood and Zella Luria applied the SD to data from a case of multiple personality. The case of ‘Eve’ was very prominent in 1950s psychology (several teams of researchers worked on it, and in 1957 the story of ‘Eve’ was even made into a film), and this paper, too, received wide attention. Osgood and Luria compared the semantic structure of Eve's three personalities, based on six SD testings with 15 concepts and 10 scales. The research design was highly experimental and established a new area of application for the SD: clinical personality and psychotherapy research. Osgood did not systematically pursue this line of research, but he used the data and his major findings in his 1957 book and returned to the case in the 1970s (see Osgood et al., 1976).

To this day, the most important reference work on the SD is The Measurement of Meaning – a book published in 1957 by Osgood, George J. Suci, and Percy H. Tannenbaum. The volume summed up the central insights from the first years of research on the SD and can be read as a systematic introduction to its theory, methodology, application, and evaluation. Toward the end, the authors addressed some of the most prominent points of criticism and gave an outlook on important tasks for future SD research. The book received good reviews and reached the status of an academic bestseller.


URL: https://www.sciencedirect.com/science/article/pii/B9780080970868032311

Measuring addiction propensity and severity: The need for a new instrument

Kevin P. Conway, ... Michael Neale, in Drug and Alcohol Dependence, 2010

Fourth, most extant instruments were constructed using classical test theory methodology, which has its own psychometric limitations that bear directly on severity measurement. The use of factor analysis, for example, does not exploit the full power of modern scaling techniques (Embretson and Reise, 2000; Hambleton and Swaminathan, 1985; Lord and Novick, 1968) that take item properties into account (e.g., whether one item indicates greater “severity” than another). While factor-analytic methods produce scales with high levels of internal consistency, they alone do not allow one to build instruments that discriminate among individuals along selected ranges of an underlying trait.
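Item response theory is the usual modern alternative. In a two-parameter logistic (2PL) model, each item carries its own discrimination a and severity (difficulty) b, so items can be ordered by how much of the latent trait they signal. A minimal sketch with invented item parameters:

    import numpy as np

    def p_endorse(theta, a, b):
        # 2PL item response function: probability that a person with
        # latent severity theta endorses an item with discrimination a
        # and severity (difficulty) b.
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    theta = 0.0  # person at the trait mean
    items = {    # hypothetical items; (a, b) values invented
        "uses more than intended": (1.5, -1.0),    # low severity
        "cannot cut down": (1.8, 0.5),
        "continued use despite harm": (2.0, 1.5),  # high severity
    }

    for name, (a, b) in items.items():
        print(name, round(p_endorse(theta, a, b), 2))
    # Higher-b items are endorsed less often at the same theta; that
    # ordering is the item property classical methods leave unused.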


URL: https://www.sciencedirect.com/science/article/pii/S0376871610001122

A review of regional science applications of satellite remote sensing in urban settings

Jorge E. Patino, Juan C. Duque, in Computers, Environment and Urban Systems, 2013

3.5 Population estimation

An understanding of the population size and spatial distribution in urban areas is essential for social, economic and environmental applications (Li & Weng, 2005; Liu, Kyriakidis, & Goodchild, 2008). The use of remote sensing for population estimation started in the mid-1950s as a way to address the shortcomings of the decennial census, such as its high cost, low frequency and intense labor requirements (Liu & Herold, 2006, chap. 13). Regional-scale population estimates can be achieved by quantifying the urbanized built-up area from remote sensing data and exploiting statistical relationships with total settlement population. In urban areas, population estimation by remote sensing often involves counting dwelling units, measuring land areas, and classifying land use. The last of these reflects the close correlation of land use with population density in urban areas. Ground data on the average number of persons per dwelling unit and for different land uses are always needed to calibrate models (Jensen & Cowen, 1999). The general strategy for population estimation from remote sensing data is to establish statistical relationships between census or field survey data and remotely sensed data using regression analysis techniques (Harvey, 2002; Liu et al., 2006; Lo, 1995; Pozzi & Small, 2005; Yuan, Smith, & Limp, 1997). Data from SPOT, Landsat TM and ETM+ sensors have been used along with census or survey data on population counts to estimate population size. Recently, building heights derived from LiDAR (Light Detection and Ranging) data and very high resolution imagery have also been used to estimate population counts at an intra-urban scale.
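In its simplest regional-scale form, this strategy is a single regression. The sketch below fits population against classified built-up area for a handful of invented settlements and then predicts a new one; real applications calibrate against census data and typically use more predictors.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Invented calibration data: built-up area (km^2) classified from
    # imagery, and census population for the same settlements.
    area = np.array([[2.1], [5.4], [9.8], [14.2], [20.5]])
    population = np.array([8000, 21500, 40300, 61000, 88700])

    model = LinearRegression().fit(area, population)

    # Estimate the population of a settlement with 12 km^2 built up.
    print(int(model.predict([[12.0]])[0]))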

Lo (1995) developed methods to extract population and dwelling unit data from a SPOT image of the Kowloon metropolitan area in Hong Kong. Different regression models were applied to link the spectral radiance values of image pixels with population densities. Accurate estimates of population and dwelling units were obtained at the macro level for the whole study area, but intra-urban variations and micro-level estimates tended to be of low accuracy because of the difficulty of discriminating residential from non-residential use in multifunctional buildings in the satellite image. In a regional-scale study, Yuan et al. (1997) found high correlations between land cover classified from a Landsat TM image and population counts from census data. By applying regression and scaling techniques, they were able to obtain a population distribution map with much more detail than census data alone offered for four counties in central Arkansas.

Harvey (2002) used a Landsat TM image and census data for Ballarat and Geelong, Australia, to allocate population estimates to each pixel of the image and overcome the problem of spatial aggregation of census data. The satellite image was first classified into residential and non-residential classes, and initial reference populations were distributed uniformly for each census zone across its residential pixels only. Pixel populations were then re-estimated by relating them to pixel spectral values with an expectation–maximization regression algorithm, and the regression equation was tested on a second image. The relative error of global estimations was less than 1% in both images, but it rose to approximately 16% and 21% in the individual census zones. Wu and Murray (2005) followed a similar approach with a Landsat ETM+ image, but instead of using pixel spectral values, they used the fraction of impervious surfaces in residential areas. Li and Weng (2005) integrated a Landsat ETM+ image and census data to estimate intra-urban variations in population density in Indianapolis, Indiana, using several remote sensing-derived variables as predictive indicators, correlation analysis to explore their relationships with population data, and stepwise regression analysis to develop models. Liu et al. (2006) stated that these results were valuable for improving population estimation, but the spatial resolution of 30 m limits their utility in urban applications. Liu and Herold (2006, chap. 13) highlighted the use of high spatial resolution satellite imagery to study intra-urban population characteristics.

Although very high spatial resolution imagery has been available for at least 10 years, since the launch of the Ikonos and Quickbird satellite programs, little research has explored the use of this type of imagery for population estimation purposes (Liu et al., 2008). Liu et al. (2006) and Galeon (2008) are two examples of this type of research. Liu et al. (2006) explored the correlation between census population density and image texture with a very high spatial resolution Ikonos image of Santa Barbara, California, using linear regression. The spatial unit used in this analysis was the census block with homogeneous land use. Different methods for describing image texture were tested, and the correlation was found to vary depending on the method used. The obtained correlation between image texture and population was not strong enough to predict or forecast residential population size, but the researchers concluded that image texture could be used to refine census-reported population distributions using remote sensing and to support smart interpolation programs that estimate human population distributions in areas where detailed information is not available. Galeon (2008) used a Quickbird satellite image to estimate the population size of informal settlements on the University of the Philippines campus, using a field survey and regression analysis. Accurate estimates of population size were obtained with first-order equations for the slum areas but not for the semi-formal housing areas present on the campus.

Although LiDAR techniques have been available since the 1960s, they have only become commonly used in the past few years (Lwin & Murayama, 2009). LiDAR data can now provide very accurate height information for land surface features such as buildings and trees. Digital volume models (DVMs) derived from LiDAR data are increasingly being used for population estimation at the urban scale and are being integrated with very high resolution imagery with good results (Qiu, Sridharan, & Chun, 2010; Lwin & Murayama, 2009, 2011, chap. 6; Ramesh, 2009; Weng, 2012). SPOT and Landsat TM imagery has been popular in population estimation applications of satellite remote sensing because of its relative success in regional- and medium-scale studies. At the urban or intra-urban scales, further research is needed to establish the best methods and procedures for population estimation, taking advantage of the very high spatial resolution satellite imagery and LiDAR data that are now widely available. Ground truth, in the form of field surveys and census data, is always needed, but the goal is to achieve the highest possible accuracy while minimizing fieldwork to keep the research both cost effective and operationally practical.


URL: https://www.sciencedirect.com/science/article/pii/S0198971512000567
