The practice of crowdsourcing /
Record type:
Bibliographic - language material, printed : Monograph/item
Title / Author:
The practice of crowdsourcing / Omar Alonso.
Author:
Alonso, Omar,
Description:
1 PDF (xix, 129 pages) : illustrations (some color).
Notes:
Part of: Synthesis digital library of engineering and computer science.
Subject:
Human computation.
Electronic resource:
https://ieeexplore.ieee.org/servlet/opac?bknumber=8726280
Electronic resource:
https://doi.org/10.2200/S00904ED1V01Y201903ICR066
ISBN:
9781681735245
Alonso, Omar,
The practice of crowdsourcing / Omar Alonso. - 1 PDF (xix, 129 pages) : illustrations (some color). - (Synthesis lectures on information concepts, retrieval, and services, 1947-9468 ; #66). - Synthesis digital library of engineering and computer science.
Part of: Synthesis digital library of engineering and computer science.
Includes bibliographical references (pages 105-127).
1. Introduction -- 1.1. Human computers -- 1.2. Basic concepts -- 1.3. Examples -- 1.4. Some generic observations -- 1.5. A note on platforms -- 1.6. The importance of labels -- 1.7. Scope and structure
Abstract freely available; full-text restricted to subscribers or individual document purchasers.
Compendex
Many data-intensive applications that use machine learning or artificial intelligence techniques depend on humans providing the initial dataset, enabling algorithms to process the rest or for other humans to evaluate the performance of such algorithms. Not only can labeled data for training and evaluation be collected faster, cheaper, and easier than ever before, but we now see the emergence of hybrid human-machine software that combines computations performed by humans and machines in conjunction. There are, however, real-world practical issues with the adoption of human computation and crowdsourcing. Building systems and data processing pipelines that require crowd computing remains difficult. In this book, we present practical considerations for designing and implementing tasks that require the use of humans and machines in combination with the goal of producing high-quality labels.
Mode of access: World Wide Web.
ISBN: 9781681735245
Standard No.: 10.2200/S00904ED1V01Y201903ICR066 (doi)
Subjects--Topical Terms:
Human computation.
Subjects--Index Terms:
human computation
LC Class. No.: QA76.9.H84 / A467 2019eb
Dewey Class. No.: 004.019
LDR 05376nam a2200673 i 4500
001 959764
003 IEEE
005 20190606190106.0
006 m eo d
007 cr cn |||m|||a
008 201209s2019 caua fob 000 0 eng d
020 $a 9781681735245 $q electronic
020 $z 9781681735252 $q hardcover
020 $z 9781681735238 $q paperback
024 7 $a 10.2200/S00904ED1V01Y201903ICR066 $2 doi
035 $a (CaBNVSL)thg00979013
035 $a (OCoLC)1103600437
035 $a 8726280
040 $a CaBNVSL $b eng $e rda $c CaBNVSL $d CaBNVSL
050 4 $a QA76.9.H84 $b A467 2019eb
082 04 $a 004.019 $2 23
100 1 $a Alonso, Omar, $e author. $3 1253105
245 14 $a The practice of crowdsourcing / $c Omar Alonso.
264 1 $a [San Rafael, California] : $b Morgan & Claypool, $c [2019]
300 $a 1 PDF (xix, 129 pages) : $b illustrations (some color).
336 $a text $2 rdacontent
337 $a electronic $2 isbdmedia
338 $a online resource $2 rdacarrier
490 1 $a Synthesis lectures on information concepts, retrieval, and services, $x 1947-9468 ; $v #66
500 $a Part of: Synthesis digital library of engineering and computer science.
504 $a Includes bibliographical references (pages 105-127).
505 0 $a 1. Introduction -- 1.1. Human computers -- 1.2. Basic concepts -- 1.3. Examples -- 1.4. Some generic observations -- 1.5. A note on platforms -- 1.6. The importance of labels -- 1.7. Scope and structure
505 8 $a 2. Designing and developing microtasks -- 2.1. Microtask development flow -- 2.2. Programming hits -- 2.3. Asking questions -- 2.4. Collecting responses -- 2.5. Interface design -- 2.6. Cognitive biases and effects -- 2.7. Content aspects -- 2.8. Task clarity -- 2.9. Task complexity -- 2.10. Sensitive data -- 2.11. Examples -- 2.12. Summary
505 8 $a 3. Quality assurance -- 3.1. Quality framework -- 3.2. Quality control overview -- 3.3. Recommendations from platforms -- 3.4. Worker qualification -- 3.5. Reliability and validity -- 3.6. Hit debugging -- 3.7. Summary
505 8 $a 4. Algorithms and techniques for quality control -- 4.1. Framework -- 4.2. Voting -- 4.3. Attention monitoring -- 4.4. Honey pots -- 4.5. Workers reviewing work -- 4.6. Justification -- 4.7. Aggregation methods -- 4.8. Behavioral data -- 4.9. Expertise and routing -- 4.10. Summary
505 8 $a 5. The human side of human computation -- 5.1. Overview -- 5.2. Demographics -- 5.3. Incentives -- 5.4. Worker experience -- 5.5. Worker feedback -- 5.6. Legal and ethics -- 5.7. Summary
505 8 $a 6. Putting all things together -- 6.1. The state of the practice -- 6.2. Wetware programming -- 6.3. Testing and debugging -- 6.4. Work quality control -- 6.5. Managing construction -- 6.6. Operational considerations -- 6.7. Summary of practices -- 6.8. Summary
505 8 $a 7. Systems and data pipelines -- 7.1. Evaluation -- 7.2. Machine translation -- 7.3. Handwriting recognition and transcription -- 7.4. Taxonomy creation -- 7.5. Data analysis -- 7.6. News near-duplicate detection -- 7.7. Entity resolution -- 7.8. Classification -- 7.9. Image and speech -- 7.10. Information extraction -- 7.11. RABJ -- 7.12. Workflows -- 7.13. Summary
505 8 $a 8. Looking ahead -- 8.1. Crowds and social networks -- 8.2. Interactive and real-time crowdsourcing -- 8.3. Programming languages -- 8.4. Databases and crowd-powered algorithms -- 8.5. Fairness, bias, and reproducibility -- 8.6. An incomplete list of requirements for infrastructure -- 8.7. Summary.
506 $a Abstract freely available; full-text restricted to subscribers or individual document purchasers.
510 0 $a Compendex
510 0 $a INSPEC
510 0 $a Google scholar
510 0 $a Google book search
520 3 $a Many data-intensive applications that use machine learning or artificial intelligence techniques depend on humans providing the initial dataset, enabling algorithms to process the rest or for other humans to evaluate the performance of such algorithms. Not only can labeled data for training and evaluation be collected faster, cheaper, and easier than ever before, but we now see the emergence of hybrid human-machine software that combines computations performed by humans and machines in conjunction. There are, however, real-world practical issues with the adoption of human computation and crowdsourcing. Building systems and data processing pipelines that require crowd computing remains difficult. In this book, we present practical considerations for designing and implementing tasks that require the use of humans and machines in combination with the goal of producing high-quality labels.
530 $a Also available in print.
538 $a Mode of access: World Wide Web.
538 $a System requirements: Adobe Acrobat Reader.
588 $a Title from PDF title page (viewed on June 4, 2019).
650 0 $a Human computation. $3 1000057
653 $a human computation
653 $a crowdsourcing
653 $a crowd computing
653 $a labeling
653 $a ground truth
653 $a data pipelines
653 $a wetware programming
653 $a hybrid human-machine computation
653 $a human-in-the-loop
776 08 $i Print version: $z 9781681735238 $z 9781681735252
830 0 $a Synthesis digital library of engineering and computer science. $3 598254
830 0 $a Synthesis lectures on information concepts, retrieval, and services ; $v #22. $3 866817
856 42 $3 Abstract with links to resource $u https://ieeexplore.ieee.org/servlet/opac?bknumber=8726280
856 40 $3 Abstract with links to full text $u https://doi.org/10.2200/S00904ED1V01Y201903ICR066
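The labeled display earlier in this record is derived from the MARC fields above: 245 $a supplies the title, 100 $a the author heading, 020 $a the electronic ISBN, and 856 $u the electronic-resource links. As an illustrative sketch only (the `record` dict and `subfield` helper below are hypothetical, not part of any cataloging system's API), the mapping could be expressed like this:

```python
# A few fields from the MARC record above, modeled as a plain dict
# of tag -> list of (subfield code, value) pairs. Illustrative only.
record = {
    "100": [("a", "Alonso, Omar,"), ("e", "author.")],
    "245": [("a", "The practice of crowdsourcing /"), ("c", "Omar Alonso.")],
    "020": [("a", "9781681735245"), ("q", "electronic")],
    "856": [("3", "Abstract with links to full text"),
            ("u", "https://doi.org/10.2200/S00904ED1V01Y201903ICR066")],
}

def subfield(rec, tag, code):
    """Return the first value of the given subfield code in a field,
    or None if the tag or code is absent."""
    for sub, value in rec.get(tag, []):
        if sub == code:
            return value
    return None

title = subfield(record, "245", "a")   # main title (245 $a)
author = subfield(record, "100", "a")  # main entry, personal name (100 $a)
isbn = subfield(record, "020", "a")    # electronic ISBN (020 $a)
url = subfield(record, "856", "u")     # electronic resource link (856 $u)
print(f"{title} {author} ISBN {isbn} -- {url}")
```

A real system would read the full binary or XML record rather than a hand-built dict, but the tag/subfield lookup pattern is the same.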