CAE Reading & Use of English — Practice Test 1
Cambridge C1 Advanced | 56 Questions | 60 Minutes | Use of English: Parts 1–4 | Reading: Parts 5–8
The idea that language shapes the way we perceive and categorise the world has long (1) a fascination for linguists and philosophers alike. The Sapir-Whorf hypothesis — named after the American linguist Edward Sapir and his student Benjamin Lee Whorf — (2) the claim that the language one speaks influences — and in its stronger form, determines — the concepts one is able to form. The hypothesis fell (3) in the mid-twentieth century, when Chomsky's work on universal grammar suggested that all humans share a common underlying cognitive architecture that language merely expresses. Yet it has (4) a remarkable comeback in recent decades, driven by experimental work on colour perception, spatial reasoning and the representation of time. Speakers of languages that (5) a distinct word for blue, for instance, treating blue and green as shades of a single colour, have been shown to process colour in measurably different ways from those whose languages draw a clear (6) between the two colours. The debate remains (7), but the shift from asking whether language determines thought to asking how much it (8) it has allowed for a more nuanced and productive research programme.
Every morning, before most people have finished their first cup of coffee, they have already made dozens of decisions — what to wear, what to eat, whether to check the news. These choices feel effortless, spontaneous, the simple product of preference. Yet beneath this surface ease lies an extraordinarily complex machinery of judgement, one that researchers in cognitive psychology have spent decades attempting to map. What they have found is both reassuring and troubling: the human mind is not the rational deliberating apparatus we like to believe it is.
The foundational insight came in the 1970s from Daniel Kahneman and Amos Tversky, whose collaboration yielded prospect theory — a framework that overturned the prevailing economic model of human behaviour. Classical economics had assumed that people weigh options according to their expected utility, calculating likely outcomes and choosing whichever maximises long-term gain. Kahneman and Tversky demonstrated, through a series of elegantly designed experiments, that this is rarely what happens. Instead, people evaluate outcomes relative to a reference point — typically the status quo — and feel losses far more acutely than equivalent gains. The asymmetry is consistent and powerful: losing twenty pounds feels approximately twice as bad as gaining twenty pounds feels good.
This discovery opened the door to what became known as behavioural economics — a discipline that has since generated an enormous body of evidence on systematic irrationality. Perhaps the most counterintuitive finding is that having more options does not make people happier or more effective at choosing. Barry Schwartz, whose 2004 book The Paradox of Choice brought this idea to a popular audience, argued that an abundance of options creates cognitive paralysis. When faced with two dozen varieties of jam rather than a handful, shoppers are both less likely to make a purchase and less satisfied when they do. Choice, it appears, is a burden as much as it is a freedom.
Compounding this is the phenomenon of cognitive bias — systematic patterns of deviation from rational judgement that affect even the most informed and experienced decision-makers. Confirmation bias leads us to seek out information that supports what we already believe, while the availability heuristic causes us to overestimate the likelihood of events that spring readily to mind, whether or not they are statistically probable. A person who has recently read about a plane crash will almost certainly overestimate aviation risk, however carefully they may consider the evidence. These biases are not errors in the conventional sense — they are features of the cognitive architecture that, in many environments, served our ancestors well. Quick, intuitive judgements about predators required no spreadsheet.
Yet the environment in which humans now operate has changed dramatically. The data-saturated, option-rich landscape of modern life places demands on the decision-making system that it was not designed to meet. Richard Thaler and Cass Sunstein have argued that this mismatch can be partially corrected through so-called nudge theory — subtle adjustments to the way choices are presented that steer people towards better decisions without restricting their freedom. Making organ donation opt-out rather than opt-in, placing healthy food at eye level in canteens, automatically enrolling employees in pension schemes: these are nudges that exploit our cognitive shortcuts rather than fighting them. Critics, however, worry that such approaches, however well-intentioned, represent a form of manipulation that bypasses the very deliberative faculties that separate us from other animals.
The deepest question raised by this research is whether self-awareness can serve as a corrective. If we know that we are prone to anchoring — the tendency to rely too heavily on the first piece of information we encounter — can we deliberately adjust? The evidence is mixed. Some studies suggest that merely knowing about a bias is insufficient to counteract it; the effects persist regardless of awareness. Others find that structured reflection and specific training can reduce their influence over time. What seems clear is that the mind cannot step entirely outside its own architecture. We can learn to navigate its shortcuts more skilfully, but we cannot dismantle them — and perhaps, given their deep evolutionary roots, we should not wish to.
Cities are not, in the ecological imagination, places of abundance. They are grey zones of conquest, where concrete overwhelmed meadow, where rivers were culverted and birds displaced by traffic. Yet something is changing. Across Europe and beyond, municipalities are beginning to reimagine urban space not as the antithesis of the natural world, but as a potential refuge for it.
Gap 41
The ecological case is relatively straightforward. Cities, for all their hard surfaces, contain a surprising density of microhabitats — south-facing walls, railway embankments, derelict lots, ornamental ponds — that can support significant biodiversity if managed with that goal in mind. Studies have found that urban areas can, in some cases, host greater species richness than the intensively farmed countryside that surrounds them.
Gap 42
The most visible manifestation of this movement is the proliferation of urban meadows — areas where grass is allowed to grow long and flowering plants are encouraged rather than suppressed. London's Camley Street Natural Park, a two-acre reserve beside St Pancras station, has become something of a model for what is possible in even the most constrained of urban environments. It supports over three hundred plant species and numerous invertebrates, all within earshot of Eurostar trains.
Gap 43
Not everyone has welcomed this development. Some residents find unmown grass and tangled bramble unsettling — a sign of neglect rather than intent. Local authorities have discovered that the politics of rewilding can be surprisingly fraught: communities that are enthusiastic about biodiversity in principle can object strongly to it in their own backyard, particularly when it involves the loss of well-kept ornamental planting that they associate with civic pride and safety.
Gap 44
The water dimension of urban rewilding is perhaps its most transformative aspect. Rivers that were buried underground in the nineteenth and twentieth centuries are being unearthed and restored to something approaching their natural courses. The daylighting of Seoul's Cheonggyecheon stream — a six-kilometre waterway that had been covered by an elevated highway since the 1970s — became an iconic example of what is possible when city planners think at scale about ecological restoration.
Gap 45
What distinguishes the most successful urban rewilding projects is not simply the quality of their ecological design but their capacity to bring communities with them. In Zurich, where the city has committed to a network of ecological corridors linking green spaces across the urban fabric, public engagement has been built into every phase of planning. Residents can adopt a patch of land, record species sightings via a city app, and attend regular events that translate ecological data into tangible local stories.
Gap 46
The scale of ambition is still modest compared to the ecological challenges cities face. A wildflower meadow beside a ring road does not offset the vast footprint of urban resource consumption. But something important is being demonstrated: that the city and the natural world are not irreconcilable opposites, and that the relationship between them can, with sufficient will, be one of cautious cohabitation.
The desire to represent the world in visual form is (9) of the oldest human impulses, predating writing by several millennia. The earliest known maps, scratched (10) clay tablets in ancient Mesopotamia, were concerned less with geographical precision than with establishing boundaries of ownership and authority. For much of human history, the map was an instrument of power as (11) as a tool of navigation. It was only (12) the emergence of systematic surveying and trigonometry in the seventeenth and eighteenth centuries that cartography began to aspire to scientific accuracy. Even then, the choices made by mapmakers — which territories to centre, (13) scale to apply, which features to include — reflected particular perspectives and priorities. The supposedly neutral projection remains a contested political object (14) this day. Digital mapping has democratised access to geographical information (15) a degree that would have been inconceivable to earlier cartographers, while simultaneously raising new questions about surveillance, data ownership and the power to define what is shown.
In the digital age, human attention has become the most (COMPETE) resource on earth. Technology companies do not sell products to their users in the traditional sense; they sell their users' (ATTEND) to advertisers. The design principles of social media platforms — infinite scroll, variable reward schedules, social validation metrics — are not (ACCIDENT). They reflect years of deliberate engineering aimed at maximising the time users spend engaged with each platform. Critics argue that this constitutes a form of (MANIPULATE) that the public is largely (AWARE) of. The (PROFIT) nature of human attention, once understood, raises profound questions about individual (AUTONOMOUS) in an age when our psychological responses are so thoroughly (DOCUMENT) and exploited.