Monday, July 8, 2019

Webinar on Research Findings

I wanted to let you know that my research into the role of fairness in collaboration is now complete. I will be giving a webinar summarizing the findings, along with an introduction by Jennie Curtis of the Garfield Foundation. The webinar will be held on July 25 at 11:00 Central Daylight Time. You can register here. And please feel free to invite others.

To complete the research, I surveyed 85 members from 57 unique networks, engaged in facilitated dialogue with 20 network practitioners, and interviewed 12 thought leaders. Some of the most interesting findings had to do with how men and women experience different levels of fairness, and with concerns about authentic decision-making in networks. You can learn more about my research through previous blog posts I kept along the way. I'm excited to share what I've found and hear your own interpretation of the data!

Monday, January 21, 2019

If It Ain't Broke...But How Do We Know?

One expression I heard a lot growing up was, "if it ain't broke, don't fix it." So far in my research into fairness and collaboration in social-impact networks, this seems like good advice. Overall, this first foray into scoping the world of such networks indicates that, despite some weak spots, stakeholders largely believe their networks to be fair, collaborative, and effective. But the data represent a generalized survey among, rather than within, networks. So how can any given network know if things are broke? In this post I ask for readers' opinions about an idea that's emerging. (Spoiler: there's a 3-question survey you can take to weigh in.)

Toward a Shared, but Context-Derived Evaluation Platform...or "If it's broke, try using what's handy first"

My spouse is the king of reusing materials. Nothing gives him greater pleasure than finding some old, sturdy piece of metal that can be repurposed to fix something broken in our house. In the world of generative social-impact networks, we've got a lot of useful knowledge that can also be appropriately repurposed by those with sufficient experience. I'd like to offer up some of what I've learned; see especially the 5th point below.


The survey I circulated and which many of you took was part of some larger research to inform the following research questions:

1. What are the perceptions of fairness among those practicing a generative social-impact network approach? Establishing a baseline for such perceptions allows for then suggesting ways to improve fairness. Result: Perceptions of fairness are high among those stakeholders choosing to take part in the survey.

2. What accounts for differences in perceptions of fairness? Determining if some networks or some types of members experience higher or lower levels of fairness provides a basis for suggesting best practices. Result: The largest difference in perceptions is that men tend to feel far more strongly than women do that their networks are fair and collaborative places.

3. Is there a perceived relationship between perceptions of fairness and collaboration in social-impact networks? Research into general forms of collaboration has found an important connection between perceptions of fairness and willingness to invest one's time and identity into a collaborative effort. This question explores whether that research holds up in the specific context of social-impact networks. Result: The data neither support nor upend previous findings. Overall perceptions of both fairness and collaboration are so high that the data cannot be differentiated sufficiently to determine a link.

4. Is there a perceived relationship between good collaboration and outcomes? As with Question 3, this tests whether the link between collaboration and outcomes identified by other research is perceived by stakeholders of social-impact networks to be true. Result: Nearly 90% of those taking the survey at least agree more than disagree that their collaboration improves outcomes, and 66% strongly agree or agree.

5. What, if any, are some feasible and desirable forms that improvements to fairness in generative social-impact networks (GSINs) might take? This is where you come in. My first recommendation is "if it ain't broke, don't fix it." But do social-impact network practitioners want some practical tools for figuring out if things are broke or not? In my research I've collected a lot more nuanced data than what I've reported in top-line takeaways or what can even be written up in longer papers. I suspect the same is true for many of you, who aren't publishing at all.

I would be interested in developing a bank of survey questions that networks can use and offering the baseline data I already have, in exchange for practitioners contributing to the data so that it can become more meaningful over time. For example, let's say you want to use the Process Quality Scale that measures fairness and authenticity. I currently have baseline data for that from my research, which will be helpful for you as you interpret results. But your results would be used to improve the existing data, as well, so that a more meaningful baseline is created for the next person to use that tool. We could also contribute notes on interventions we've taken, and what the results have been. This would allow us to learn from each other, while still being grounded in what we are seeing in our own spaces.

If you are interested in learning more or contributing your ideas, please fill out this very short questionnaire. You can also comment on this blog.



Sunday, November 11, 2018

Strings Being Pulled and Other Challenges

My previous posts have talked about how different demographics seem to experience fairness and collaboration differently in their social-impact networks. According to my survey, network stakeholders in general do experience their networks as fair and collaborative spaces, but a few areas for improvement have emerged.

In my survey, I asked respondents to rate their level of agreement with various statements that have to do with whether processes for making decisions and allocating resources are fair. The tool I used for this is the Process Quality Scale, which has 15 such statements and a validated scoring method. Feel free to reach out for more information about my methods and interpretation, but for the sake of this blog, I'll cut to the chase. One last word before I do: don't forget that most respondents actually do experience their networks as fair (at least as measured by the PQS), so what's written below is in the interest of continuous improvement, and not meant as an indictment.

Strings Pulled from the Outside and Decisions Made in Advance

About 35% of the 85 respondents had some level of agreement with the statement that "often decisions are made in advance and simply confirmed by the process." Meanwhile, about 30% had some level of agreement that "strings are being pulled from the outside, which influence important decisions."

A large minority of respondents felt strings are often pulled from the outside.


Some People's Merits Are Taken for Granted

When presented with the statement that "some people’s “merits” are taken for granted while other people are asked to justify themselves," roughly 30% of respondents had some level of agreement, though only 10% stated they fully agreed.

The idea that some people's merits are taken for granted while others have to justify themselves reminded me of George Orwell's Animal Farm, where the animals overthrew their human overlords, only to have new inequalities emerge.

Gender Differences

As I mentioned in my posts with more detailed survey results on fairness, men tend to feel more strongly than women do that processes are fair in their networks. However, it's not clear that this difference strongly impacts personal decisions to collaborate actively in a network. For example, 90% of the 41 women who responded to a question about whether they actively participate in their network said that they did. Only 78% of the 28 men who responded to that question said the same. More on this in future posts.
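
For readers who like to sanity-check differences like this, a quick two-proportion z-test can be sketched in Python. The counts below are approximations reconstructed from the percentages above (about 37 of 41 women, 22 of 28 men), so treat this as an illustration rather than a definitive analysis:

```python
from math import sqrt

def two_prop_z(x1, n1, x2, n2):
    """Two-proportion z-statistic for comparing participation rates."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # pooled proportion under the null hypothesis
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Approximate counts: ~90% of 41 women (37) and ~78% of 28 men (22)
# said they actively participate in their network.
z = two_prop_z(37, 41, 22, 28)
print(f"z = {z:.2f}")  # about 1.35, below the 1.96 cutoff for p < .05
```

With samples this small, the gap doesn't reach conventional significance, which is consistent with my caution about reading too much into subgroup differences.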

What to Do about It?

I recently had the opportunity to connect in-person with leaders and advisers for five different networks. We sat down together for a bit over an hour to talk about how to interpret these results and what might be done to improve things. Stay tuned for another blog post on this topic mid-week! 

Sunday, November 4, 2018

Who feels the collaboration?

In my last post, I shared the subset of survey results concerning fairness in collaborative networks. In this post, I share some preliminary findings regarding collaboration: whether stakeholders feel it's happening, whether they think it improves outcomes, whether they feel the network shares a common goal, and whether they themselves feel they participate in the network.

More good news

As with perceptions of fairness, the overwhelming majority of respondents to the survey feel their networks are collaborative spaces, that the collaboration makes them more effective, and that everyone is working toward a shared goal. The statements presented to respondents were adapted from ones administered to the RE-AMP Network by Peter Plastrik and Chinwe Onyeagoro as part of an outside evaluation a few years back. Because I wanted to be able to tie people's self-assessed level of participation to other aspects of the survey, I added a question asking people if they actively participate. Most respondents do see themselves as active participants in the network. 

The bar chart below shows the percent of respondents from each demographic who said they "strongly agree" with each statement about collaboration; later I'll show you a broader spectrum of data.

Percent of respondents who strongly agree with statements about collaboration in their network.
*While we generally want a sample size of at least 30 for reliable estimates, only 28 men responded to the bottom three statements, and only 27 responded to the top statement. The sample size for those in a racial minority is 12, and for men who are a gender minority, the sample size is 10.


As you can see (or maybe you can't if you are looking at this on a mobile device), about a third of all respondents strongly agree that the network collaboration helps them be more effective. Slightly more than that strongly agree that they have a shared purpose. More still strongly agree that they have a highly collaborative experience in the network, though some interesting differences between demographics appear.

A gender distinction

As with perceptions of fairness, the subgroups of respondents that seem to be the most different are men and women. I don't want to make too much of this, since we can see that both groups seem to view their networks as collaborative places, but the subtle difference is quite interesting. Take, for example, the statement about the network being a highly collaborative place. 

In the bar chart below, we see that nearly 60% of men strongly agree with the statement, compared to just under 40% of women. However, 100% of women at least agree more than disagree with the statement, whereas only about 85% of men have some level of agreement. Although this particular question is the one where this pattern shows up most strongly, it holds for all the responses: men are more likely than women either to strongly agree or to disagree with a given statement.


This is where the small sample size really becomes a challenge. When we are talking about only 28 men altogether, it's hard to say if this is just a data blip, or if it is pointing us toward something useful. In the coming weeks I will be hosting some group conversations with network practitioners to try to understand if it's productive to dig into these differences. I will also be looking at the open-ended responses from the survey for clues to the direction members want their networks to take. Do feel free to reach out to me with your own interpretation as well.

Sunday, October 28, 2018

Early Results of Process Quality Survey in Collaborative Networks

This summer I put out a call for people engaged in collaborative networks to take a survey about their experiences in those networks. Thank you to everyone who responded and who encouraged others to do the same. Although I may not be able to use the results in this round of research, you can still take an abbreviated version of the survey if you like.

Below you can see some of the demographics of who responded to the survey. You will see that the answers don't always add up perfectly; that's because most of the questions did not require a response and some people chose not to respond.
Survey respondents' demographic info

You probably notice a lot of the same things I did when looking at the demographics: for example, the high number of responses from women (which is particularly noteworthy since nearly half of respondents said men and women were present in roughly equal numbers in their network), and the small number of respondents who are part of a racial minority in their network (of those 12, only 2 identified as white). Also, if you are like me, you are developing theories about why the demographics shook out as they did.

In this post, I will share some preliminary analysis of the quantitative data that was gathered regarding fairness. My next post will examine the quantitative data on collaboration. Future posts will explore open-ended responses, as well as dig more deeply into some of the interesting patterns that emerge.

Some words of caution: As a general rule of thumb, a sample size of 30 or more is needed before estimates become reasonably reliable. As you can see, we only barely reach that threshold for men, and we fall well short of it for people who are in a racial minority in their network, as well as for those in the governmental sector. Moreover, those of us working in this field know that the term "network" is sometimes interpreted very differently by different people. I tried to filter for this by asking people entering the survey if they met certain criteria, but it's still hard to say how much these results reflect true "apples-to-apples" comparisons.
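
To give a rough sense of why these sample sizes matter, here is a minimal Python sketch of the worst-case 95% margin of error for an estimated proportion. It assumes a simple random sample, which is itself a generous assumption for a convenience survey like this one:

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a proportion estimated from n responses."""
    return z * sqrt(p * (1 - p) / n)

# Subgroup sizes from the survey: all respondents, men, racial minorities
for n in (85, 28, 12):
    print(f"n = {n:3d}: +/- {margin_of_error(n) * 100:.1f} percentage points")
```

At n = 12 the margin of error approaches 30 percentage points, which is why I treat the smallest-subgroup results as directional at best.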

Lastly, because I want this blog to be readable, I am mainly avoiding detailed discussion of methodology, but please do reach out if you want more information about how I arrived at the interpretations below.

So, if you are willing to hold all these caveats and treat these results lightly, let's dig in!

Procedural Fairness 

The survey used the Process Quality Scale (PQS) developed by Darrin Hicks and colleagues to measure perceptions of how fair a process is (as opposed to how fair the outcomes of a process are, see my earlier blog post). This 15-question questionnaire deals with whether people see the process as authentic, revisable, and consistently applied. In the survey, I asked people to base their responses on their impressions about processes in their network overall, as opposed to one particular process.

The good news

In general, people find their networks to be engaged in processes that are mainly fair, at least insofar as the PQS can measure such a thing. In 8 of the questions, the most common response was that processes are more fair than not, and in the other 7 questions, the most common response was that the processes are fair.
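
For anyone curious what "most common response" means in practice, it is simply the mode of each question's ratings. Here is a minimal sketch in Python using made-up ratings on a hypothetical 1-6 scale (these are not my actual data, nor the PQS's real scoring):

```python
from collections import Counter

# Hypothetical Likert ratings for two PQS-style questions,
# on a made-up 1-6 scale where 6 = "fair" and 5 = "more fair than not".
responses = {
    "decisions made in advance": [5, 5, 6, 4, 5, 6, 5],
    "strings pulled from outside": [6, 6, 5, 6, 4, 6, 6],
}

for question, ratings in responses.items():
    modal_rating, count = Counter(ratings).most_common(1)[0]
    print(f"{question}: modal response {modal_rating} ({count} of {len(ratings)})")
```

My actual analysis follows the same idea, applied to the 15 PQS statements with the validated scoring method.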

I went into this analysis curious to see if a strong difference would be noted between members and staff/consultants. However, the results were virtually indistinguishable. Members seem to think their networks are a pretty fair place to be, and so do the staff and consultants.

The interesting news

You don't have to be a novelist to know that readers don't want to hear about all the happy things, readers want the dirt! Not you, of course, but perhaps the other readers of this blog. Well, it turns out that when we break the data down, we see that the intensity with which people perceive procedural fairness varies significantly. 

I was surprised to see that the group that seems to have the strongest sense of fairness in their networks was men who are in a network that is majority-woman. The second strongest perceptions of fairness come from those who are in a racial minority in their network, although this group had a wider range of opinion than the men who were in gender minorities. 


Bubble chart showing the modal responses to the PQS, broken down by subgroup


Women and men are the two groups with the most pronounced differences in interpretations of the level of fairness. In 13 of the questions on the PQS, the most common response from men was that the process was fair. For women, the most common response was that the process was fair for only 3 questions, and for 11 others it was that the process was only more fair than not. To one question, the most common response from women was that things were more unfair than not (the question was about whether everyone has equal opportunity to influence decisions).

I suspect the relatively strong feelings of fairness among racial minorities may have something to do with the fact that 75% of that group are men, and of those about half are in a network composed largely of women. A larger sample size might reveal significantly different answers. As it is, respondents who were a racial minority were still the only group to have more than one question generate a response that things were more unfair than not (one question was about giving some more than they deserve while shortchanging others; another was about some people's merits being taken for granted while others have to justify themselves).

As you can see, there is a lot to unpack. So stay tuned, and, if you want to see all the questions, feel free to go to the abbreviated version of the survey here.

Interpretation

As I move forward in the research, I will be examining the other data and engaging in interviews to attempt to understand the differences in particular between men's and women's perceptions. Is it possible that women are more accustomed to collaboration than men, and thus situations that seem incredibly collaborative to men are merely acceptably collaborative to women? Could men be obliviously on the receiving end of deferential treatment? Could women be consulted less, or expected to pick up more of the procedural work? Is the fact that it was more challenging to get men to take the survey somehow related?  Most importantly, is it a problem that there is a discrepancy in the answers of men and women, and can this data help us make networks better? Drop me a line if you want to be part of the conversation!





Tuesday, August 28, 2018

Procedural and Distributive Fairness

In this post, I return to the topic of Fairness Heuristic Theory, and make a key distinction between two ways fairness can be experienced: in an outcome or in a process.

What's a Heuristic and Why Should I Care?

As it's used here, think of a heuristic as a rule of thumb, one that allows you to make a decision without having to think through every angle. You know it's imperfect, but you use it to help you make sense of what's going on and make judgments fairly quickly. I once met a woman who would only date a man who had an iPhone. To her, Androids represented slovenliness, lack of ambition, and a willingness to inconvenience others. By focusing on iPhone users, she was able to give her attention to just a few men, one of whom she fell head over heels for and married.

In the case of Fairness Heuristic Theory (FHT), an individual's perceptions of fairness are used as a rule of thumb to guide their decisions about whether to collaborate with others. Unlike the case of the iPhone-phile above, the rule of thumb operates in the background, not usually something that is explicitly thought about. To understand why it matters for networks, we need to understand the difference between procedural and distributive fairness (also called procedural or distributive justice). 

Procedural and Distributive Fairness

When we think about whether things are fair, many of us think right away about outcomes. But research by E. Allan Lind, Kees Van den Bos and others indicates it's not usually that type of fairness that influences our decisions about whether to join with others. Rather, we look for clues about whether we can trust the process, how we can expect to be treated, and what our standing in the group will be.

This classic union cartoon from the second half of the last century demonstrates the concept of distributive (un)fairness.

Distributive fairness is about who gets what (funding, credit, blame, promotions, exposure, etc.). Procedural fairness is about how the getting gets done.

Whereas this one has to do with procedural (un)fairness

Research by D. Hicks and C. Larson indicates that a process is generally perceived as fair when it is seen as authentic, revisable, inclusive, and transparent. (Want to share how your network measures up? Take my research survey.)

Which Is More Important?

Neither type of fairness is intrinsically more important. But there is a reason that our perceptions of procedural fairness generally influence willingness to join collaborations more than distributive fairness does: namely, we are usually exposed to process before outcome.

Research by Lind and colleagues has shown that we tend to form opinions about fairness very quickly, and that, once formed, those opinions are pretty stable. In other words, if our first exposure to a network is fair, then things that happen later in that network will tend to be interpreted through the understanding that the network is basically fair. Things that might otherwise be seen as unfair will either be explained away or be seen as an aberration. Conversely, if our first experience is perceived as unfair, then later experiences will be interpreted with the understanding that the network is a basically unfair place.

As a general rule, potential network members are presented first with options for participation (i.e., processes for collaborating to share knowledge, co-create strategy, experiment together, etc.). Rarely (but not never) is a member invited to a network by being presented with an outcome, such as a portion of funding or some other resource.

Because the first exposure to a network is usually a process, procedural fairness is usually a stronger force in a member's overall opinion about it. At least, if FHT holds true.

Does It Actually Matter?

I've been interviewing thought leaders in the world of social-impact networks. Some of them feel the issue of fairness is an important one that affects their networks, others feel members typically are not affected by a desire for fair treatment either in terms of process or outcome, but rather join networks for altruistic reasons.  

What do you think? Leave me a comment or take my research survey.






Sunday, July 15, 2018

Commodities of Power in a Collaborative Network

In my last post, I invited readers to take a survey about their experiences with collaboration in their networks. The survey is still open and I encourage you to take it if you haven't yet! In this post, I discuss what I am learning about "commodities of power" in collaborative networks. A commodity of power is something that demonstrates prestige and standing.

In his book Learning for Action, Peter Checkland gives an example of an organization where people were divided into "KT" and "NKT." The "KT" people "Knew Tom," who was the founder. The "NKT" people "Never Knew Tom" and were relegated to a lesser role. Having known Tom was a commodity of power.

I asked a group of network practitioners the following question: If I were an alien from outer space who didn't know anything about humans, what is the decoder ring you could give me so that I know who has power?
If I were an alien, how could I know who has power and who doesn't?
  

Right away, everyone listed formal roles. Someone on the steering committee has more power than a member not on the steering committee, for example. As the conversation progressed, people were able to give my imaginary alien the keys to decoding much more subtle ways that power is understood in their networks. Examples of commodities of power included:

  • Convening a conversation or meeting
  • Having other members do the work of your vision
  • Having exposure, such as by speaking or presenting
  • Control over the budget
  • Access to decision-makers
  • Being eloquent
  • Having a firm commitment to one's values
  • Conversely, being in a position to accuse someone of not living up to shared values
  • Being tattled to (in other words, being the person someone complains to in order to correct someone else's behavior)

The point of learning about commodities of power is that it helps one frame interventions. If recommendations are going to run counter to prevailing forces, it's best to understand that up front.

What do you think? Leave a comment sharing some of the commodities of power at work in your network, or take the research survey that will ask you about this and other factors relevant to collaboration.