Monday, July 8, 2019

Webinar on Research Findings

I wanted to let you know that my research into the role of fairness in collaboration is now complete. I will be giving a webinar summarizing the findings, with an introduction by Jennie Curtis of the Garfield Foundation. The webinar will be held on July 25 at 11:00 Central Daylight Time. You can register here. And please feel free to invite others.

To complete the research, I surveyed 85 members from 57 unique networks, engaged in facilitated dialogue with 20 network practitioners, and interviewed 12 thought leaders. Some of the most interesting findings had to do with how men and women experience different levels of fairness, and with concerns about authentic decision-making in networks. You can learn more about my research through previous blog posts I kept along the way. I'm excited to share what I've found and hear your own interpretation of the data!

Monday, January 21, 2019

If It Ain't Broke...But How Do We Know?

One expression I heard a lot growing up was, "if it ain't broke, don't fix it." So far in my research into fairness and collaboration in social-impact networks, this seems like good advice. Overall, this first foray into scoping the world of such networks indicates that, despite some weak spots, stakeholders largely believe their networks to be fair, collaborative, and effective. But the data represent a generalized survey among, rather than within, networks. So how can any given network know if things are broke? In this post I ask for readers' opinions about an idea that's emerging. (Spoiler: there's a 3-question survey you can take to weigh in.)

Toward a Shared, but Context-Derived Evaluation Platform...or "If it's broke, try using what's handy first"

My spouse is the king of reusing materials. Nothing gives him greater pleasure than finding some old, sturdy piece of metal that can be repurposed to fix something broken in our house. In the world of generative social-impact networks, we've got a lot of useful knowledge that can also be appropriately repurposed by those with sufficient experience. I'd like to offer up some of what I've learned; see especially the fifth point below.


The survey I circulated, and which many of you took, was part of a larger research effort designed to answer the following questions:

1. What are the perceptions of fairness among those practicing a generative social-impact network approach? Establishing a baseline for such perceptions allows for then suggesting ways to improve fairness. Result: Perceptions of fairness are high among those stakeholders who chose to take part in the survey.

2. What accounts for differences in perceptions of fairness? Determining if some networks or some types of members experience higher or lower levels of fairness provides a basis for suggesting best practices. Result: The largest difference in perceptions is that men tend to feel far more strongly than women do that their networks are fair and collaborative places.

3. Is there a perceived relationship between fairness and collaboration in social-impact networks? Research into general forms of collaboration has found an important connection between perceptions of fairness and willingness to invest one's time and identity in a collaborative effort. This question explores whether that finding holds up in the specific context of social-impact networks. Result: The data neither support nor upend previous findings. Overall perceptions of both fairness and collaboration are so high that the data cannot be differentiated sufficiently to determine a link.

4. Is there a perceived relationship between good collaboration and outcomes? As with Question 3, this tests whether the link between collaboration and outcomes identified by other research is perceived by stakeholders of social-impact networks to be true. Result: Nearly 90% of those taking the survey at least agree more than disagree that their collaboration improves outcomes, and 66% agree or strongly agree.

5. What, if any, are some feasible and desirable forms that improvements to fairness in GSINs might take? This is where you come in. My first recommendation is "if it ain't broke, don't fix it." But do social-impact network practitioners want some practical tools for figuring out whether things are broke or not? In my research I've collected far more nuanced data than what I've reported in top-line takeaways, or even what can be written up in longer papers. I suspect the same is true for many of you who aren't publishing at all.

I would be interested in developing a bank of survey questions that networks can use and offering the baseline data I already have, in exchange for practitioners contributing to the data so that it can become more meaningful over time. For example, let's say you want to use the Process Quality Scale that measures fairness and authenticity. I currently have baseline data for that from my research, which will be helpful for you as you interpret results. But your results would be used to improve the existing data, as well, so that a more meaningful baseline is created for the next person to use that tool. We could also contribute notes on interventions we've taken, and what the results have been. This would allow us to learn from each other, while still being grounded in what we are seeing in our own spaces.
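To make the pooling idea a bit more concrete, here's a minimal sketch in Python of how a shared baseline might work mechanically. Everything here is hypothetical: the scores, the function names, and the assumption that Process Quality Scale responses are simple 1–5 numeric ratings are all illustrative, not drawn from my actual dataset. A network compares its own mean score against the pooled baseline, then contributes its scores back so the baseline becomes more meaningful for the next user.

```python
from statistics import mean, stdev

# Hypothetical pooled baseline of Process Quality Scale scores (1-5 scale)
# contributed by earlier networks. Real values would come from the shared bank.
baseline_scores = [4.1, 3.8, 4.4, 3.9, 4.2, 4.0, 3.7, 4.3]

def compare_to_baseline(my_scores, baseline):
    """Report how a network's mean score sits relative to the pooled baseline."""
    my_mean = mean(my_scores)
    base_mean = mean(baseline)
    base_sd = stdev(baseline)
    # A rough z-score: how many baseline standard deviations from the pool.
    z = (my_mean - base_mean) / base_sd
    return my_mean, base_mean, z

def contribute(my_scores, baseline):
    """Fold a network's scores back into the pool so the baseline improves."""
    return baseline + list(my_scores)

# Example: one network's own survey results (again, hypothetical numbers).
my_scores = [3.5, 3.9, 3.6, 3.8]
my_mean, base_mean, z = compare_to_baseline(my_scores, baseline_scores)
print(f"our mean: {my_mean:.2f}, pooled baseline: {base_mean:.2f}, z: {z:.2f}")

# After interpreting the comparison, contribute the scores to the pool.
baseline_scores = contribute(my_scores, baseline_scores)
```

The design choice worth noting is the last line: the baseline is not static reference data but grows with each contribution, which is exactly the learn-from-each-other exchange described above.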

If you are interested in learning more or contributing your ideas, please fill out this very short questionnaire. You can also comment on this blog.