
As an intermediary, how can you consistently find new and interested solvers around the world (i.e. people ready to propose solutions to technological problems)? You need effective incentives to attract them to your platform, to get them to register, to engage them in problem solving, and to persuade them to release intellectual property (IP). Registration has a direct negative impact on the number of solvers, even though it may be, in principle, a good way to select supposedly motivated ones.

Few experts register on crowdsourcing platforms

How many experts actually register on crowdsourcing platforms? 10,000? 100,000? The most famous platforms barely reach 300,000 solvers, which is far fewer than the tens of millions of experts in the world. In addition, the figures claimed by many companies are unclear: how can the claims of newcomers be verified? How many subscribers genuinely want to engage in problem solving? These systems lead to a misleading competition for the "biggest" pool of solvers, disregarding truth[1] and quality[2].

The large majority of registered solvers do not participate

Self-registration is not the right paradigm for reaching global expertise. Many studies show that only 1% of subscribers are active on such platforms. The largest existing Open Innovation communities have only a few thousand active users.

Solvers are not experts

Crowdsourcing means that anybody can be a solver. This is a fashionable and demagogic argument for attracting a large number of random solvers to a platform (the large number of subscribers then being used as an argument to attract client companies and supposedly launch the platform). But not all registered solvers are experts. Some platforms actually turn this into an advantage, claiming that what matters is the solutions or ideas brought by the solvers, not their background. This is true to some extent (user-driven innovation, etc.), but it is idealistic for the highly critical problems seen on most platforms, which require strong expertise.

[1] Some claims of recent Open Innovation platforms can easily be refuted using online website statistics analyzers, which show that there can be a factor of up to 30 between reality and the claims.

[2] The same remark applies to the pool of problems (especially regarding their formulation).

The present article is based on the chapter we wrote in the book « A Guide to Open Innovation and Crowdsourcing: A Compendium of Best Practice, Advice and Case Studies from Leading Thinkers, Commentators and Practitioners ».