Sunday, December 16, 2018

#el30: A Communal Experience

This is my community experience for #el30, in which Stephen asks us to "create an assignment the completion of which denotes being a member of the community."

I am still focused on text, so I took one post that mentioned community in either the title or the text from the following #el30 blogs:
I then entered each link into a new analysis space at Voyant Tools to create a collection of #el30 posts about community. For the sake of this particular analysis, I removed the names of months from the word cloud because they were clouding (pun intended) the results. Voyant generated the following word cloud:

The word cloud presents the most common nouns and verbs from all of the posts; however, the word cloud is live, which means you can change it. Click the Scale drop-down in the lower left corner to select a specific post, and slide the Terms slider to include more or fewer words. Your assignment is to play with the various posts and collections of terms to create different word clouds and to see if any meaning emerges for you. Then leave a comment on this post to tell the rest of us what you learned.
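For anyone curious about what a tool like Voyant is doing behind the word cloud, here is a minimal sketch in Python. It is a toy under stated assumptions, not Voyant's actual code: it counts word frequencies across a few stand-in texts and filters an extra stopword list (month names, as I did above).

```python
from collections import Counter
import re

# Extra stopwords to filter, mirroring my removal of month names.
EXTRA_STOPWORDS = {
    "january", "february", "march", "april", "may", "june", "july",
    "august", "september", "october", "november", "december",
}

def word_frequencies(texts, stopwords=EXTRA_STOPWORDS):
    """Count word frequencies across texts, skipping stopwords."""
    counts = Counter()
    for text in texts:
        words = re.findall(r"[a-z']+", text.lower())
        counts.update(w for w in words if w not in stopwords)
    return counts

# Stand-in posts, not the actual #el30 texts.
posts = [
    "In November the community gathered to talk about community.",
    "A word cloud shows the community what it talks about most.",
]
freq = word_frequencies(posts)
print(freq.most_common(3))  # 'community' dominates; 'november' is filtered out
```

A word cloud is just this frequency table rendered with font sizes; the Terms slider simply changes how many entries of `most_common` get drawn.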

My failure to post most weeks during the MOOC does not reflect my interest; rather, I'm at the end of my school term, in the middle of some vexing family issues, and about to leave for two weeks in the Bahamas (so don't feel sorry for me). I just couldn't focus on writing, but I did much of the reading and watched most of the videos. I'll carry this conversation forward for the next half year, I suspect.

Thanks to Stephen and all for doing this.

Saturday, November 17, 2018

#el30: Prepositions on the Edge of Identity

Last week, Stephen Downes assigned an identity graph for those participating in #el30. Like Jenny Mackness and Matthias Melcher, I was initially perplexed that the graph “should not contain a self-referential node titled ‘me’ or ‘self’ or anything similar”. Surely, I thought, any picture of my connections should include me, right? Then I had a light-bulb moment and realized that the web of connections, the graph, was me, and that this view of identity is in keeping with Downes' connectivism theory which says, among other things, that meaning emerges within the network of relationships (edges) among nodes rather than in a single node itself. I subscribe to this belief, but old mental habits are difficult to break. I still want to see me as … well, just me, a single, individual node. So building an identity graph could be a therapeutic exercise for me.

I examined the graphs built by others in #el30 for some clues about how to go about this—I always like a model to use even if I intend to violate the model. Melcher based his identity graph on his Twitter and library interests. Mackness used a variety of life events, roles, and locations. Roland Legrand based his map on his spiritual/philosophical beliefs and life roles. I found them all to be wonderful insights into the people who created them, but none of them clicked for me—not wrong, mind you, just no click.

For one thing, I was troubled by the edges, the links, between the nodes. The nodes at least have labels, but the links are nothing more than a thin line from one node to another. This strikes me as a serious oversight. If the meaning is in the relationships, then the links ought to mean something. In most of the graphs I've examined and the tools I've tried for generating graphs, the links are just skinny little lines. At best, they might have an arrow to indicate directional flow. That did not satisfy me.

Then it occurred to me: I teach writing, and I write. Writing seems to be a solid chunk of reality out of which to build an identity graph, and the chunk is definitely related to how I identify myself. Moreover, writing includes those built-in links (prepositions, conjunctions, commas, and other linking devices) that can add texture and color to the edges. Of course, most language scholars (both poetic and rhetorical) tend to favor the nodes—the nouns and verbs, or actors and actions—of writing and ignore the little words. We don't capitalize them in titles, for instance (Gone with the Wind); yet, it's the little words that connect the big words to each other and create much of the meaning, as I discussed in a handful of posts as part of Rhizo14 four years ago. Prepositions were on my mind because of some remarks by Michel Serres. In his book Conversations on Science, Culture, and Time (1995) with Bruno Latour, Serres suggests that prepositions mean almost nothing or almost anything, which turns out to be about the same thing, but that they do the critical work of arranging and connecting the actors, actions, and settings. It seemed rich at the time, but I did not pursue the ideas very far.

In his earlier essay "Platonic Dialogue" in Hermes: Literature, Science, Philosophy (1982), Serres says, "writing is first and foremost a drawing, an ideogram, or a conventional graph" (65). I do not think that Serres is speaking of graphs as we have this week in #el30—he almost certainly means something like a mark or picture—but I want to play with this connection between writing and graphs. My intent is to build an identity graph using the #el30 posts that I have written thus far. The four posts result in a fairly short 3,147-word document when the text is aggregated. I'm using Voyant Tools to analyze the #el30 text, which I also used back in Rhizo14, and you can see my Voyant dashboard here. I also used a dashboard that distinguished each post here. This dashboard has some interesting data about my posts as posts within my blog, but I will not use this dashboard in this post.

Unfortunately, Voyant by default uses a stopword list to eliminate prepositions, conjunctions, and other classes of small words from its tools, deeming those words as irrelevant and mostly meaningless. The documentation for Voyant says it this way: "Typically stopword lists contain so-called function words that don’t carry as much meaning, such as determiners and prepositions (in, to, from, etc.)." I'm dismayed but not surprised. Prepositions get no respect from writers, rhetors, and grammarians. However, I intend to use prepositions as edges in my identity graph. I suspect that the prepositions and other connectors will give the links texture, color, spin, and direction that will enrich the meaning of the local connection and the network of connections.
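To make the idea concrete, here is a small sketch that inverts the usual stoplist: instead of discarding the function words, it counts only them. The preposition list and the sample sentence are my own illustrations, not anything Voyant provides.

```python
from collections import Counter
import re

# A deliberately small preposition list; a real analysis would use a
# fuller inventory of function words.
PREPOSITIONS = {"of", "in", "to", "from", "on", "with", "at", "by", "for", "into"}

def connector_counts(text):
    """Count ONLY the connectors that a stopword list would discard."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w in PREPOSITIONS)

sample = "The movement of energy, of matter, and of information flows from node to node."
print(connector_counts(sample))  # 'of' leads, as it does in my own posts
```

Run over my four posts, a count like this is what surfaces of with its 104 occurrences.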

You can see a word cloud of my posts here:

We are all familiar with word clouds, but I'm thinking now that they are proto-graphs with all the nodes and none of the edges. Thus, they are limited in what they reveal. I do, however, like the different sizes and colors of the nodes, and I think I want a graph tool that keeps the different sizes and colors of the nodes and includes the different edges. The Voyant suite of tools does not quite do that—or at least, I have not found the tool that does.

So I'm following Jenny Mackness' lead and also using Matthias Melcher’s think tool – Thought Condensr. I thought I would map the top 5 words in my posts, but I managed to do just one: data, the most common major word in my four posts. The graph looks like this:

My writing over the past month reveals a preoccupation with data (the most common of the big words in my posts at 39 occurrences), and the identity graph above expresses my particular orchestration of nodes and edges that identifies me like a thumbprint. Everyone in #el30 is interested in and thinks about data, but I daresay that none have a print like mine. Yes, they have similar prints, perhaps, but not exactly this one. Just as all fingerprints have lines, arches, loops, and whorls, none have them arranged in the same way. That graph above identifies Keith Hamon—or at least a bit of him at a certain scale. This graph orchestrates drowning in data from backyard (in general, read the node/edge clusters from the blue node on the left, through a green connector, then to the red data node, and on to another green connector and a yellow node) with the other clusters of nodes and links to create a unique yet still recognizable fractal image.

Unfortunately, I cannot mark off a given cluster of nodes and edges as a unit, so you can actually create new clusters by reading from the left into data and then out to any other node. You can also read from right to left to create even more clusters that generate different meanings. I think these are limitations of the graphing tools, my skills with the tools, or both. I need a graphing tool that will allow me to identify both nodes and edges and the resulting clusters and to view them in a 3-D or 4-D space. 2-D is too limiting.

I realize, however, that I've made the same error that the Voyant Tools creators did: I've put the noun (in this case, data) in the center, putting all the focus on it and building all the meaning around it. I should have put the focus on one of the prepositions—say, of with its 104 occurrences. I simply do not have time just now to graph all 104 instances of of, so I did just the first 10, and it looks like this:

Look at what a workhorse this little word of is. Consider how it connects all these nodes to create meaning at various scales, to make this particular arrangement of nodes and edges identifiably me. Consider one cluster: University of Miami. All of us in #el30 have some university node, but I may be the only one with a UM node. Even if another of us has a UM node, all the other nodes stitched together by of quickly identify me. I'm the one with a University of Miami node and a movement of energy, matter, information, and organization node. Add the 102 other of clusters, and you've pinned me to the wall. That's me.
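Here is a rough sketch of how a program might treat prepositions as labeled edges rather than discarding them. The triple extraction is deliberately naive (the word before, the preposition, the word after), nothing like real parsing, and the sample phrases merely echo the clusters above.

```python
import re
from collections import defaultdict

PREPOSITIONS = {"of", "in", "to", "from", "on", "with"}

def labeled_edges(text):
    """Extract (node, node) pairs joined by a preposition edge label."""
    words = re.findall(r"[a-z]+", text.lower())
    edges = defaultdict(list)
    for i in range(1, len(words) - 1):
        if words[i] in PREPOSITIONS:
            # The preposition becomes the label on the edge between
            # its left and right neighbors.
            edges[(words[i - 1], words[i + 1])].append(words[i])
    return dict(edges)

print(labeled_edges("University of Miami, movement of energy"))
```

Even this crude pass recovers the two of clusters named above as labeled edges; a genuine identity graph would need a real parser, but the shape of the data is the same: nodes joined by edges that finally mean something.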

I'm not really satisfied with these graphs, but I think they are a wonderful start to thinking about writing and how it creates an identity. I'm very happy Downes assigned this. I'm even happier that I tried it. Seems it was great therapy and substantial learning for me.

Sunday, November 4, 2018

#el30: Interpreting the Cloud

The point of the computing cloud for me has been the continued abstraction of data and services from the computing platform. I've been using computers since the early 1980s (in 1982, I wrote my dissertation on the University of Miami's UNIVAC 1100), and I became a Mac user in 1987, so I am well-versed in the problems with exchanging data on one platform with users on another platform. I'm glad those wars are mostly over. I now use a PC at work, a MacBook Pro at home, an Asus Chromebook on the road, and an iPhone everywhere. The underlying hardware and operating systems are almost transparent windows to my online data, documents, and communications. I'm writing this post at home on my MacBook, but I've written posts on all my devices, including my iPhone. I also no longer ask my students what kind of device they have when I make an assignment as they all have at least a smartphone (again, I don't care which) that will let them access the class wiki and do the work. However, they do need a Google account to do most of the work.

And here is one more platform layer that I want to remove: Google (or Facebook or Twitter). Some of the technologies that Tony Hirst and Stephen Downes discussed in their video chat (over Google, of course) seem to be taking the first steps toward separating a cloud service (say, video chat) from a monolithic platform such as Google. This continues a long progression in computing: we were freed from particular hardware, then from particular operating systems, and maybe soon from particular cloud platforms. So someday I may be able to fire up a container (made by Tony Hirst and released into the commons) on any of my half-dozen devices and hold a video chat with others peer-to-peer on their different devices and containers. I may even write my own containers for special services and release those containers into the commons where they can be used or remixed into different containers to render different services.


My understanding of complex systems is all about the movement of energy, matter, information, and organization within and among systems. As a complex system myself, I self-organize and endure only to the degree that I can sustain the flows of energy (think food) and information (think EL 3.0) through me. The cloud is primarily about flows of information, and the assumption I hear in Stephen's discussion is that I, an individual, should be able to control that flow of information rather than some other person or group (say, Facebook) and that I should be able to control the flow of information both into and out of me. I find this idea of self-control, or self-organization, problematic—mostly because it is not absolute. As far as I know, only black holes absolutely command their own spaces, taking in whatever energy and information they like and giving out nothing (well, almost nothing—seems that even black holes may not be absolute).

It helps me to walk outside for discussions such as this, so come with me into my backyard for a moment. The day is cool and sunny, so I'm soaking in lots of energy from sunlight. I've had a great breakfast, so more energy. I've read all the posts about the cloud in the #el30 feed, so I have lots of information. Of course, I'm pulling in petabytes of data from my backyard, though I'm conscious of only a small bit. Even with the bright light, I can see only a sliver of the available bandwidth. I hear only a little of what is here, and I certainly don't hear the cosmic background radiation, the echo of the big bang that is still resonating throughout the universe. I'm awash in energy and information. I always have been. Furthermore, I can absorb and process only a bit (pun intended) of the data and energy streams flowing around me, and very little of this absorption is my choice. Yes, if the Sun is too bright, I can go back inside, put on more clothing, or put on sunscreen, but really, what have I to do about the flow of energy from the Sun? And what have I to do with the house to go into, the clothing to put on, or the sunscreen? All of those things are complex systems that came to me through other complex systems (bank loans, retail stores, manufacturing factories, Amazon, and my own income streams). Most of the energy and information streams that I tap into owe little to me, not even the energy and information that I feed back.

In his post "Post-it found! the low-tech side of eLearning 3.0 ;-)", AK quotes George Siemens as saying something like "what information abundance consumes is attention", and this gets me, I hope, to a point about all this: Siemens is talking about only a tiny subset of information available to me, even though it tends to be the information that consumes most of my attention. There are other far more important streams of energy and information that I should attend to, I think.

Ahh ... maybe this is my point: even if I can avail myself of more access to more information, I'm already drowning in data. What I desperately need are better filters for selecting among the data and better models for organizing that selected data into useful, actionable knowledge. This is what my students need. Everyone in the U.S. needs better filters and models, especially with national elections on the horizon. In this sense, we are not so different from all the humans and other living creatures who have existed, except that our social systems are so much more complex and complicated than those that came before. What data do I trust, and after I've determined that, how do I arrange this data into actionable knowledge? Facebook and Google are filtering data for me now, and they are even arranging that data into actionable knowledge, but I don't think I trust them. Can the cloud help me interpret the cloud?

Saturday, October 27, 2018

#el30 Data and Models

I should be grading student documents this morning, but I'm thinking about #el30. I may have an assessment of that next week.

Anyway, as I was reading some posts about Data, I was struggling with our previous discussion about the differences between human and machine learning, when something that AK wrote sparked some coherent ideas (at least dimly coherent for my part). AK said: "This got me thinking about the onus (read: hassle) of tracking down your learning experiences as a learner. ... As a learner I don't really care about tracking my own learning experiences."

I thought: no, I don't want to track all my learning experiences either. Tracking all those experiences would take all my time, leaving no time for more learning, much less time for grading my students' papers. So maybe computers can be useful for tracking my learning experiences for me? A computer can attend me--say, strapped to my wrist, in my pocket, or embedded in my brain--and collect data about whatever my learning experiences are. After all, computers can collect, aggregate, and process data much faster than I can, and as Jenny notes, computers don't get tired.

But what data does a computer identify and collect? Even the fastest computer cannot collect all the bits of data involved in even the simplest learning task. How will the computer know when I'm learning this and not that? Well, the computer will collect the data that some human told it to collect. Can the computer choose to collect different data if the situation changes, as it certainly will? Perhaps. But again, it can only ever collect a subset of data. How will it know which is the relevant, useful subset? The computer's subset of data may be quantitatively larger than my subset, but will it be qualitatively better? How might I answer that question?

Turning experience into data is a big issue, and I want to know how the xAPI manages it. Making data of experience requires a model of experience, and a model always leaves out most of the experience. The hope, of course, is that the model captures enough of the experience to be useful, but then that utility is always tempered by the larger situation within which the learning and tracking take place. Can a computer generate a better model than I can? Not yet, I don't think.
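For what it's worth, xAPI's answer is to force every experience into a fixed actor-verb-object model: one statement per experience, and everything the model leaves out is simply gone. The sketch below shows that shape; the name, email, and activity IDs are made up for illustration.

```python
import json

# A minimal xAPI-style statement: who (actor) did what (verb)
# to what (object). Everything outside these three slots is lost
# to the model.
statement = {
    "actor": {"name": "Keith", "mbox": "mailto:keith@example.com"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": {"en-US": "experienced"},
    },
    "object": {
        "id": "http://example.com/activities/el30-week-2",
        "definition": {"name": {"en-US": "EL 3.0, week 2 readings"}},
    },
}

print(json.dumps(statement, indent=2))
```

The fixed shape is exactly my worry: the statement records that an experience happened, but the backyard, the sunlight, and the vexing family issues that colored the learning never make it into the record.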

If both the computer and I are peering into an infinity of experience, and I can capture only about six feet in data while the computer can capture sixty feet, or even six hundred feet, we are both still damned near blind quantitatively speaking. Reality goes a long way out, and there is still something about constructing models to capture that reality that humans have to do.

I've no doubt that computers will help us see farther and wider than we do now, just as telescopes and microscopes helped us. I've also no doubt that computers will help us analyze and find patterns in that additional data, but I'm not yet convinced that computers will create better models of reality without us. When I see two computers arriving at different views of Donald Trump and arguing about their respective views, then I might change my mind.

The #MeToo Text: From Documents to Distributed Data #el30

This week's Electronic Learning 3.0 task is about distributed data, and it gives me a way to think about the #MeToo document that has occupied me for the past year and that has been the topic of several posts in this blog. In short, I take the #MeToo text (all several million tweets of it and more) to represent a new kind of distributed document that is emerging on the Net. Thus, it may be a manifestation of the kind of shift in how we handle data that Downes discusses.

Downes introduces his topic this way:
This week the course addresses two conceptual challenges: first, the shift in our understanding of content from documents to data; and second, the shift in our understanding of data from centralized to decentralized. 
The first shift allows us to think of content - and hence, our knowledge - as dynamic, as being updated and adapted in the light of changes and events. The second allows us to think of data - and hence, of our record of that knowledge - as distributed, as being copied and shared and circulated as and when needed around the world.
I teach writing--both one's own writing and the writing of others--which since the advent of Western rhetoric in Greece some two and a half thousand years ago has focused on centralized documents. By that I mean that the function of a document (this blog post, for instance, or a poem or report) was to gather data, organize that data into a format appropriate for a given rhetorical situation, and then present that data in a single spoken or written text. This is generally what I teach my students to do in first-year college composition. This is what I'm trying to do now in this blog post. This is, at least in part, what Downes has done in his Electronic Learning 3.0 web site. Most Western communication has been built on the ground of individual documents or a corpus of documents (think The Bible, for instance, or the Mishnah or the poems of John Berryman).

This idea of a centralized document carries several assumptions that are being challenged by the emergence of distributed data, I think. First, the Western document assumes a unified author--either a single person or a coherent group of people. Western rhetoric has a strong tendency to enforce unity even where it does not exist (think of the effort to subsume the different writers of The Bible, for instance, under the single author God). The Western notion of author-ity still follows from this notion of a single, unified author, and the value and success of the document depends in great part upon the perceived authority of this author.

Along with a single, unified author, the Western document assumes a unity within itself. A document is supposed to be self-contained, self-sufficient. It is supposed to include within it all the data that is necessary for a reader to understand its theme or thesis. I don't believe that any document has ever been self-sufficient, but this is the ideal. A text should be coherent with a controlling theme (poetic) or thesis (rhetoric). The integrity and value of the text is measured by how well the content relates to and supports the theme or thesis.

And of course, a document should have a unity of content. It should have a single narrative, a single experience, a single argument. Fractured, fragmented narratives bother us, and they never make the best-seller lists. Incoherent arguments seldom get an A or get published.

There may be other unities that I could mention, but this is sufficient to make my point that we have a long history of aggregating, storing, and moving data in documents with their implied unities. And then along comes #MeToo: a million tweets and counting over days, weeks, and months. We have this sense that surely #MeToo is hanging together somehow, but is it really a single text?

Well, not in the traditional sense. It has no unified author. Just when we thought that Alyssa Milano started it, we learn that some other woman, Tarana Burke, used the phrase ten years ago. #MeToo isn't even a unified group. A million women are not a unified group. It has no unified thesis. It isn't even an argument. There is no dialectic or rationale. It has no unified content. We think it does because of the single hash tag, but each woman brings a unique set of experiences to her tweet: some have a leer or catcall, some gropings, others rapes or years of beatings. All of them have something different, something unique. They cover the gamut, the field, the space.

#MeToo is a swarm, and we really don't like swarms. Who's speaking here, to whom, and about what? What's the point? And what kind of document is this? How do I read it? How do I respond?

#MeToo is a rhizome, a fractal, and I'm thinking we will come to write and to read this way. We will think this way. Perhaps we always have, and our documents obscured that for us. #MeToo makes explicit a million neurons firing.

And finally, I must recognize that #MeToo could neither have been written nor read without our technology. This way of knowing, thinking, and expressing is possible only with help--in this case, Twitter to write it and, after a fashion, to read it--though reading millions of tweets is rather impossible for a single human to do. We need the data analysis powers of our computers to even approach a comprehensive reading of #MeToo. We need something like Valentina D'Efilippo's reading strategies and tools in her article "The anatomy of a hashtag — a visual analysis of the MeToo Movement".
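A first, crude machine "reading" of a distributed document might look like this sketch: aggregate a stream of tweets by hashtag and count the distinct voices behind it. The tweets here are invented stand-ins, of course, not real #MeToo data.

```python
# Stand-in tweet stream: many authors, one hashtag, no unified text.
tweets = [
    {"author": "a", "text": "it happened to me too #MeToo"},
    {"author": "b", "text": "#MeToo at my first job"},
    {"author": "c", "text": "years of it #MeToo"},
    {"author": "a", "text": "still thinking about this #MeToo"},
]

def distinct_voices(tweets, tag):
    """Count how many different authors share a hashtag."""
    return len({t["author"] for t in tweets if tag.lower() in t["text"].lower()})

print(distinct_voices(tweets, "#MeToo"))  # 3 distinct authors across 4 tweets
```

Even this trivial aggregation makes the point: the hashtag binds the swarm, but no count or visualization recovers a single author, thesis, or narrative from it.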

I'm wondering, then, what happens when not only data is distributed and decentralized, but when documents themselves become distributed and decentralized. Is this fake news?

Monday, October 22, 2018

Being Human among Computers: #el30

With a number of other online colleagues, I'm starting a new MOOC with Stephen Downes entitled "E-Learning 3.0". According to Stephen's introduction:
This course introduces the third generation of the web, sometimes called web3, and the impact on e-learning that follows. In this third generation we see greater use of cloud and distributed web technologies as well as open linked data and personal cryptography.
The first week featured a Google Hangout between Stephen in Canada and George Siemens in Australia. I've posted the video here, starting it about seven-and-a-half minutes in to avoid the setup issues.

As Jenny Mackness notes in her blog post about the conversation, Siemens and Downes wax philosophical in their conversation, centering "around what it means to be human and what is human intelligence in a world where machines can learn just as we do."

While I understand the fascination of such a question as computer technologies increasingly approximate many of our intellectual capabilities, in some ways the question seems moot. For me, part of what it means to be human is to use tools and technologies that enhance our innate human capabilities. Admittedly, most of our early tools enhanced our physical capabilities, making us stronger and faster and warmer, but from the beginning, we created technologies that enhanced our intellectual capabilities. I think of language as a technology, and I am not yet convinced that computers will change us more than language in both spoken and written forms has already done. I can almost see computers as a refinement and extension of language, which started with speech, eventually developed into writing—making marks also led to math and drawings—and is being expressed now through computers. Few things distinguish us from other life forms as much as our tools and technologies do.

Did Shakespeare write Hamlet or did the English language? Well, both actually.

Part of the fascination of this question about human vs. computer intelligence comes from our apprehension that computers will become more powerful than we are. This is an old fear, as the American folk tale of John Henry demonstrates, but for me, the lesson of John Henry is that we will continue to use computers to make us smarter despite our fears. I suppose the fearful prospect is if computers will use us to make themselves smarter or if they will simply come to ignore us, having become so smart themselves that our abilities add nothing to them. I don't think they will destroy us; rather, they'll abandon us. This is a problem mostly if you think that humans are the smartest thing in the universe and that computers will usurp our position. It seems rather chauvinistic to think that humans are the crowning achievement in this wondrously large and varied universe. The odds are surely against it, I think.

Almost all complex systems that I know about can learn: taking in information from the ecosystem, processing that information, making structural adjustments to better fit to their environments, and then feeding back information into the ecosystem, which likewise is trying to make a better fit for itself. I have no doubt that computers will do the same, and if our ecosystem comes to include smart machines, then we and the rest of the ecosystem will have to adapt to those new entities. The universe will manage that adaptation quite nicely and count itself more advanced for it.

But that's the long game. In the short game, I am keen to explore how smart machines can help me and my students learn differently, maybe better.

Saturday, July 14, 2018

RhizoRhetoric: ANT Roots

I'm reading Bruno Latour's 2005 book Reassembling the Social: An Introduction to Actor-Network Theory, and the implications for a rhizomatic rhetoric are worth careful exploration over several posts. This is the first.

The first chapter of Latour's book presents his reasons for devising actor-network theory (ANT) and for writing the book: his discomfort with the assumption by conventional sociology of the social as an existing domain within which to embed and define groups. Latour prefers to start with the emerging group to follow the connections and interactions both within the group and with its environment to uncover how the social emerges. To my mind, Latour wants to define from the inside out rather than from the outside in. Following the actual, existing traces of the group's activities means being willing to follow tracks that might not be recognized as social from the perspective of any given social theory.

In her review of the book, Barbara Czarniawska begins with a quote from Gilles Deleuze: "There is no more a method for learning than there is a method for finding treasures...(Gilles Deleuze, Difference and Repetition, 1968/1997: 165)". I like this nod to Deleuze as recognition of the more open-ended approach both Deleuze and Latour bring to their studies. Learning demands a willingness to re-examine existing structures, points of view, methods, and theories and then to reinforce those that prove helpful and to change or abandon those that prove harmful. Our existing knowledge both enables us to know more and limits what more we can know. When we already know what will happen, then we are more likely to miss what actually happens. Deleuze and Latour are both looking for ways around this dilemma of knowledge. Shunryu Suzuki says it best for me in his book Zen Mind, Beginner's Mind (1973): "In the beginner’s mind there are many possibilities, but in the expert’s there are few." Suzuki, Deleuze, and Latour are, of course, speaking of issues in the complex domain rather than the simple or complicated domains, as defined in Dave Snowden's Cynefin framework. Like these thinkers, I think that most of life is complex, and I'm certain that rhetoric is, despite the myriad attempts by rhetoricians to render it simple or at least merely complicated.

In a sense, then, all of these fellows, and certainly Latour, are resisting the tendency to view life through too narrow a lens, to put our experiments into too small a box, to render simple or no more than complicated that which is rightly complex. Latour makes this clear when he compares the shift in thinking required by ANT with the shift in thinking required by modern physics. He says:
A more extreme way of relating the two schools is to borrow a somewhat tricky parallel from the history of physics and to say that the sociology of the social remains ‘pre-relativist’, while our sociology has to be fully ‘relativist’. In most ordinary cases, for instance situations that change slowly, the pre-relativist framework is perfectly fine and any fixed frame of reference can register action without too much deformation [Cynefin's simple/complicated domains]. But as soon as things accelerate, innovations proliferate, and entities are multiplied [Cynefin's complex/chaotic domains], one then has an absolutist framework generating data that becomes hopelessly messed up. This is when a relativistic solution has to be devised in order to remain able to move between frames of reference and to regain some sort of commensurability between traces coming from frames traveling at very different speeds and acceleration. Since relativity theory is a well-known example of a major shift in our mental apparatus triggered by very basic questions, it can be used as a nice parallel for the ways in which the sociology of associations reverses and generalizes the sociology of the social. (12)
I particularly like this comparison of ANT sociology with modern physics, as it seems to me that modern physics has moved us from the modern world of the Enlightenment and Newton into the postmodern world of Einstein, Bohr, Deleuze, and Carlos Castaneda. I mention Castaneda because he provides the perfect image of ANT years before Latour thought of it. Also, Deleuze and Guattari mention Castaneda in their book A Thousand Plateaus, where they note that in The Teachings of Don Juan, the Yaqui sorcerer Don Juan Matus gives his student Carlos instructions about how to cultivate a garden of hallucinogenic herbs:
Go first to your old plant and watch carefully the watercourse made by the rain. By now the rain must have carried the seeds far away. Watch the crevices made by the runoff, and from them determine the direction of the flow. Then find the plant that is growing at the farthest point from your plant. All the devil's weed plants that are growing in between are yours. Later … you can extend the size of your territory by following the watercourse from each point along the way. (ATP, 11)
This makes Latour's point quite nicely and most graphically: start with an initial observation of a functioning group, then follow the traces (the watercourses and crevices) that are actually there (not the ones you think should be there based on your fixed, rectangular theory of what a garden should look like), scribbling like mad to capture as much as you can.

Though as often happens, the poets and prophets were there first. In a 1956 interview in The Paris Review, William Faulkner says of theory: "Let the writer take up surgery or bricklaying if he is interested in technique. There is no mechanical way to get the writing done, no shortcut. The young writer would be a fool to follow a theory. Teach yourself by your own mistakes; people learn only by error." He says of his own method for writing Nobel-quality novels: “It begins with a character, usually, and once he stands up on his feet and begins to move, all I can do is trot along behind him with a paper and pencil trying to keep up long enough to put down what he says and does.”

This may be the heart of ANT: start with an observation and trot along behind to see where it goes, what it connects to, and what energy and information it exchanges. There's your novel, your sociology, or your physics. Or your rhetoric.

Czarniawska explains Latour's intentions for his book this way:
The question for social sciences is not, therefore, ‘How social is this?’, but how things, people, and ideas become connected and assembled in larger units. Actor-network theory (ANT) is a guide to the process of answering this question. (1)
Latour devotes much of his first chapter to distinguishing his approach to sociology from established approaches. As Czarniawska says, "Students of the social need to abandon the recent idea that 'social' is a kind of essential property that can be discovered and measured, and return to the etymology of the word, which meant something connected or assembled" (1). Latour says it this way:
Even though most social scientists would prefer to call ‘social’ a homogeneous thing, it’s perfectly acceptable to designate by the same word a trail of associations between heterogeneous elements. Since in both cases the word retains the same origin—from the Latin root socius— it is possible to remain faithful to the original intuitions of the social sciences by redefining sociology not as the ‘science of the social’, but as the tracing of associations. In this meaning of the adjective, social does not designate a thing among other things, like a black sheep among other white sheep, but a type of connection between things that are not themselves social. (5, italics in original)
A social group for Latour is not defined from the outside by measuring how well the group matches a definition of social, regardless of how sophisticated or admirable the definition might be; rather, a social group is defined from the inside as the researcher crawls into the group to trace the associations at work within the group and between the group and its environment. The working out of these associations -- these dynamic exchanges of energy, information, matter, and organization among actors -- defines the group. For Latour, this is the work of the ANT sociologist.

I am deeply attracted to this orientation to study, analysis, and understanding, and it helps explain what one of my writing groups tried to do in a recently published paper "Pioneering Alternative Forms of Collaboration", in which we explored how our online group formed to write several documents and presentations about the #rhizo14/15 MOOCs we all participated in. We wrote this particular paper from the inside out, or tried to, and I think we were able to capture a few points that we might have missed had we done a traditional rhetorical study of our work together. In this document, we did not start with a rhetorical definition of how academic scholars should collaborate online to write their documents; rather, we tried to trace what we actually did to see if we could figure out how and why it worked. I'm proud of this paper, though I think we could do a much better job of it now than we did then. Still, for me it was a step in a rewarding direction. And this is worth adding: it was not a destination, just a direction. We will not likely create a swarm method of scholarly writing for other groups to follow, though we may trace a few paths that others may walk, more or less. That remains to be seen.

So like Latour, I can orient myself to my studies by starting with an observation of an actor/action and then tracing as carefully as possible the connections and interactions within the actor and between the actor and its environment of myriad other actors. I will almost certainly rely on my existing models of reality to try to understand the actor/action, but I also must be willing to relax those models to allow for the connections and interactions not included in my model. Like an ant, I must be willing to follow any trail -- especially those that lead to wrong turns and dead-ends on the map of my theory -- for that is precisely when I am positioned to learn.