API Data Preview: 曲老师
API Data Format
API endpoint:
http://human-ppt-api.jeemoo.net/ppt/api/ppt-data/222
{
"title": "曲老师",
"humanUrl": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100457_098001.mp4",
"pptBackgroundUrl": "",
"pptImgList": [
{
"pageId": 1804,
"mp3": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100558_158514.mp3",
"bgImg": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100543_143276.png",
"humanUrl": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_101254_574374.mp4",
"nodeTime": null,
"captionsList": [
{
"text": "大家好!",
"startTime": "00:00:00",
"endTime": "00:00:01"
},
{
"text": "今天我们将介绍第二章的内容感知机Perceptron。",
"startTime": "00:00:01",
"endTime": "00:00:05"
},
{
"text": "重点包括感知机的基本原理、工作方式以及在机器学习中的应用。",
"startTime": "00:00:05",
"endTime": "00:00:11"
},
{
"text": "通过本次分享,您将了解感知机如何帮助我们解决分类问题,并掌握其背后的数学模型。",
"startTime": "00:00:11",
"endTime": "00:00:19"
},
{
"text": "让我们一起深入了解感知机的世界吧!",
"startTime": "00:00:19",
"endTime": "00:00:22"
}
]
},
{
"pageId": 1805,
"mp3": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100606_166048.mp3",
"bgImg": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100543_143549.png",
"humanUrl": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_101256_576298.mp4",
"nodeTime": null,
"captionsList": [
{
"text": "线性分类器,简单来说,就是用一条直线来区分两类样本。",
"startTime": "00:00:00",
"endTime": "00:00:05"
},
{
"text": "比如图中,X和O分别代表两类样本,中间的直线就是一个分类函数,它能将这两类样本完全分开。",
"startTime": "00:00:05",
"endTime": "00:00:14"
}
]
},
{
"pageId": 1806,
"mp3": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100616_176846.mp3",
"bgImg": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100543_143863.png",
"humanUrl": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_101257_577758.mp4",
"nodeTime": null,
"captionsList": [
{
"text": "线性函数,简单来说,就是在不同维度的空间里,它有不同的表现形式。",
"startTime": "00:00:00",
"endTime": "00:00:06"
},
{
"text": "在1维空间里,它就是一个点;在2维空间里,它是一条直线;",
"startTime": "00:00:06",
"endTime": "00:00:12"
},
{
"text": "在3维空间里,它是一个平面。",
"startTime": "00:00:12",
"endTime": "00:00:14"
},
{
"text": "如果忽略维度,这种线性函数还有一个统一的名字超平面HyperPlane。",
"startTime": "00:00:14",
"endTime": "00:00:21"
}
]
},
{
"pageId": 1807,
"mp3": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100629_189092.mp3",
"bgImg": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100544_144117.png",
"humanUrl": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_101259_579239.mp4",
"nodeTime": null,
"captionsList": [
{
"text": "线性函数分类问题。",
"startTime": "00:00:00",
"endTime": "00:00:02"
},
{
"text": "比如我们有一个线性函数,我们可以取阈值为0。",
"startTime": "00:00:02",
"endTime": "00:00:06"
},
{
"text": "这样当有一个样本需要判别时,我们就看的值。",
"startTime": "00:00:06",
"endTime": "00:00:10"
},
{
"text": "若大于0,就判别为类别O;若小于0,则判别为类别X。",
"startTime": "00:00:10",
"endTime": "00:00:16"
},
{
"text": "这就是分类面。",
"startTime": "00:00:16",
"endTime": "00:00:18"
}
]
},
{
"pageId": 1808,
"mp3": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100641_201823.mp3",
"bgImg": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100544_144375.png",
"humanUrl": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_101300_580869.mp4",
"nodeTime": null,
"captionsList": [
{
"text": "感知机,1957年由Rosenblatt提出的神经网络基础。",
"startTime": "00:00:00",
"endTime": "00:00:04"
},
{
"text": "输入是实例的特征向量,输出是类别,取1和1。",
"startTime": "00:00:04",
"endTime": "00:00:10"
},
{
"text": "它将实例划分为正负两类,属于判别模型。",
"startTime": "00:00:10",
"endTime": "00:00:14"
},
{
"text": "基于误分类损失函数进行极小化,学习算法简单。",
"startTime": "00:00:14",
"endTime": "00:00:18"
},
{
"text": "利用梯度下降法实现优点,分为原始形式和对偶形式。",
"startTime": "00:00:18",
"endTime": "00:00:23"
}
]
},
{
"pageId": 1809,
"mp3": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100658_218054.mp3",
"bgImg": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100544_144610.png",
"humanUrl": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_101302_582639.mp4",
"nodeTime": null,
"captionsList": [
{
"text": "请稍等。",
"startTime": "00:00:00",
"endTime": "00:00:01"
},
{
"text": "感知机模型定义:",
"startTime": "00:00:01",
"endTime": "00:00:03"
},
{
"text": "假设输入空间特征空间X属于Rn,输出空间是11。",
"startTime": "00:00:03",
"endTime": "00:00:08"
},
{
"text": "输入x表示实例的特征向量,对应于输入空间特征空间的点,输出y表示实例的类别,由输入空间到输出空间的函数fxsignwxb,称为感知机。",
"startTime": "00:00:08",
"endTime": "00:00:21"
},
{
"text": "模型参数:权值向量w,偏置b。",
"startTime": "00:00:21",
"endTime": "00:00:25"
},
{
"text": "符号函数signx:当x0时为1,当x0时为1。",
"startTime": "00:00:25",
"endTime": "00:00:30"
}
]
},
{
"pageId": 1810,
"mp3": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100713_233140.mp3",
"bgImg": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100544_144869.png",
"humanUrl": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_101304_584511.mp4",
"nodeTime": null,
"captionsList": [
{
"text": "感知机模型,简单来说,就是一种用于分类的机器学习模型。",
"startTime": "00:00:00",
"endTime": "00:00:05"
},
{
"text": "它通过线性方程wcdotxb0来定义超平面,其中w是法向量,b是截距。",
"startTime": "00:00:05",
"endTime": "00:00:12"
},
{
"text": "这个方程将数据点分为两类:正类和负类。",
"startTime": "00:00:12",
"endTime": "00:00:16"
},
{
"text": "在图中,我们看到一个二维空间中的超平面,它将数据点分为两部分。",
"startTime": "00:00:16",
"endTime": "00:00:23"
},
{
"text": "在这个例子中,正类用圆圈表示,负类用十字表示。",
"startTime": "00:00:23",
"endTime": "00:00:28"
},
{
"text": "超平面wcdotxb0将这两类数据分开。",
"startTime": "00:00:28",
"endTime": "00:00:31"
}
]
},
{
"pageId": 1811,
"mp3": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100731_251718.mp3",
"bgImg": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100545_145125.png",
"humanUrl": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_101306_586383.mp4",
"nodeTime": null,
"captionsList": [
{
"text": "如何定义损失函数?",
"startTime": "00:00:00",
"endTime": "00:00:02"
},
{
"text": "自然选择:误分类点的数目。",
"startTime": "00:00:02",
"endTime": "00:00:04"
},
{
"text": "但损失函数不是wb连续可导,不宜优化。",
"startTime": "00:00:04",
"endTime": "00:00:09"
},
{
"text": "另一选择:误分类点到超平面的总距离。",
"startTime": "00:00:09",
"endTime": "00:00:13"
},
{
"text": "距离:1wwx0b。",
"startTime": "00:00:13",
"endTime": "00:00:16"
},
{
"text": "误分类点:yiwxib0。",
"startTime": "00:00:16",
"endTime": "00:00:18"
},
{
"text": "误分类点距离:1wyiwxib。",
"startTime": "00:00:18",
"endTime": "00:00:21"
},
{
"text": "总距离:1wyiwxib。",
"startTime": "00:00:21",
"endTime": "00:00:23"
}
]
},
{
"pageId": 1812,
"mp3": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100747_267438.mp3",
"bgImg": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100545_145352.png",
"humanUrl": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_101308_588295.mp4",
"nodeTime": null,
"captionsList": [
{
"text": "请稍等。",
"startTime": "00:00:00",
"endTime": "00:00:01"
},
{
"text": "感知机学习策略的核心观点是损失函数的定义和应用。",
"startTime": "00:00:01",
"endTime": "00:00:06"
},
{
"text": "损失函数是用来衡量模型预测结果与实际结果之间的差距的指标。",
"startTime": "00:00:06",
"endTime": "00:00:12"
},
{
"text": "在这个公式中,Lwb表示损失函数,w和b分别是权重和偏置。",
"startTime": "00:00:12",
"endTime": "00:00:18"
},
{
"text": "M为误分类点的集合,即那些被模型错误分类的数据点。",
"startTime": "00:00:18",
"endTime": "00:00:24"
},
{
"text": "这个损失函数告诉我们,对于每个误分类点,我们需要调整权重和偏置,以减少损失值。",
"startTime": "00:00:24",
"endTime": "00:00:31"
},
{
"text": "通过不断迭代优化,我们可以找到一个最优的权重和偏置,使得模型能够正确分类所有的数据点。",
"startTime": "00:00:31",
"endTime": "00:00:39"
}
]
},
{
"pageId": 1813,
"mp3": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100810_290959.mp3",
"bgImg": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100545_145603.png",
"humanUrl": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_101310_590886.mp4",
"nodeTime": null,
"captionsList": [
{
"text": "请稍等。",
"startTime": "00:00:00",
"endTime": "00:00:01"
},
{
"text": "感知机学习算法的核心观点是求解最优化问题。",
"startTime": "00:00:01",
"endTime": "00:00:05"
},
{
"text": "具体来说,我们通过最小化损失函数Lwb来找到最优的超平面w和b。",
"startTime": "00:00:05",
"endTime": "00:00:12"
},
{
"text": "支撑点包括随机梯度下降法,它通过不断迭代更新w和b来最小化目标函数。",
"startTime": "00:00:12",
"endTime": "00:00:20"
},
{
"text": "损失函数L的梯度告诉我们如何调整w和b以减小损失。",
"startTime": "00:00:20",
"endTime": "00:00:26"
},
{
"text": "对于w,梯度为yixi;对于b,梯度为yi。",
"startTime": "00:00:26",
"endTime": "00:00:30"
},
{
"text": "这意味着我们需要在每个迭代中更新w和b,使其更接近最优解。",
"startTime": "00:00:30",
"endTime": "00:00:36"
},
{
"text": "选取误分类点更新公式表明,我们可以通过增加误分类点对w和b的影响来改进模型。",
"startTime": "00:00:36",
"endTime": "00:00:45"
},
{
"text": "具体地,我们将误分类点xi乘以学习率yi添加到w中,同时将b加上同样的值。",
"startTime": "00:00:45",
"endTime": "00:00:52"
},
{
"text": "这样可以逐步修正模型,直到所有数据点都正确分类。",
"startTime": "00:00:52",
"endTime": "00:00:57"
}
]
},
{
"pageId": 1814,
"mp3": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100825_305577.mp3",
"bgImg": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100545_145843.png",
"humanUrl": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_101312_592853.mp4",
"nodeTime": null,
"captionsList": [
{
"text": "感知机学习算法,简单来说,就是一种机器学习方法。",
"startTime": "00:00:00",
"endTime": "00:00:04"
},
{
"text": "它通过训练数据集来构建模型。",
"startTime": "00:00:04",
"endTime": "00:00:08"
},
{
"text": "输入是训练数据集和学习率,输出是感知机模型。",
"startTime": "00:00:08",
"endTime": "00:00:12"
},
{
"text": "算法步骤包括:",
"startTime": "00:00:12",
"endTime": "00:00:14"
},
{
"text": "选取初值,从训练集中选取数据,如果存在误分类点,则转至选取数据,直至训练集中没有误分类点。",
"startTime": "00:00:14",
"endTime": "00:00:24"
}
]
},
{
"pageId": 1815,
"mp3": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100838_318050.mp3",
"bgImg": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100546_146087.png",
"humanUrl": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_101314_594448.mp4",
"nodeTime": null,
"captionsList": [
{
"text": "请稍等。",
"startTime": "00:00:00",
"endTime": "00:00:01"
},
{
"text": "正例:x133Tx243T负例:",
"startTime": "00:00:01",
"endTime": "00:00:05"
},
{
"text": "x311Tw000b001迭代误分类wb0001x133T12x322T03x311T14x300T25x133T16x322T27x311T38011T3",
"startTime": "00:00:05",
"endTime": "00:00:25"
}
]
},
{
"pageId": 1816,
"mp3": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100848_328598.mp3",
"bgImg": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100546_146308.png",
"humanUrl": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_101316_596065.mp4",
"nodeTime": null,
"captionsList": [
{
"text": "感知机学习算法,它很神奇。",
"startTime": "00:00:00",
"endTime": "00:00:02"
},
{
"text": "如果数据线性可分,经过有限次迭代就能找到完美划分的超平面和模型。",
"startTime": "00:00:02",
"endTime": "00:00:09"
},
{
"text": "比如,训练数据集D,每个样本都有特征x和标签y,假设这些样本在单位模长的超平面上是线性可分的,那么感知机算法在迭代后会收敛,只要样本都在R范围内。",
"startTime": "00:00:09",
"endTime": "00:00:24"
}
]
},
{
"pageId": 1817,
"mp3": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100949_389235.mp3",
"bgImg": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100546_146637.png",
"humanUrl": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_101319_599766.mp4",
"nodeTime": null,
"captionsList": [
{
"text": "请稍等。",
"startTime": "00:00:00",
"endTime": "00:00:01"
},
{
"text": "1核心观点:",
"startTime": "00:00:01",
"endTime": "00:00:02"
},
{
"text": "BlockNovikoff的定理指出,如果训练数据集在单位模长的超平面下线性可分,感知器算法在迭代之后会收敛。",
"startTime": "00:00:02",
"endTime": "00:00:12"
},
{
"text": "2支撑点1:假设对所有x,成立xR。",
"startTime": "00:00:12",
"endTime": "00:00:16"
},
{
"text": "3解释:",
"startTime": "00:00:16",
"endTime": "00:00:17"
},
{
"text": "这意味着数据点都在一个圆内,且圆心到超平面的距离为1。",
"startTime": "00:00:17",
"endTime": "00:00:23"
},
{
"text": "这样,感知器算法可以找到一个合适的超平面将数据分开。",
"startTime": "00:00:23",
"endTime": "00:00:28"
},
{
"text": "4核心观点:通过迭代更新权重向量w,感知器算法能够收敛。",
"startTime": "00:00:28",
"endTime": "00:00:34"
},
{
"text": "5支撑点2:",
"startTime": "00:00:34",
"endTime": "00:00:36"
},
{
"text": "使用迭代公式wk1wkynxn,其中yn是第n个样本的标签。",
"startTime": "00:00:36",
"endTime": "00:00:43"
},
{
"text": "6解释:",
"startTime": "00:00:43",
"endTime": "00:00:44"
},
{
"text": "每次迭代,算法都会根据错误分类的样本进行调整,逐步逼近最优解。",
"startTime": "00:00:44",
"endTime": "00:00:50"
},
{
"text": "7核心观点:",
"startTime": "00:00:50",
"endTime": "00:00:52"
},
{
"text": "经过k次迭代后,算法的性能指标如误差会达到某个上限。",
"startTime": "00:00:52",
"endTime": "00:00:57"
},
{
"text": "8支撑点3:利用迭代公式和数据特性,证明了算法的收敛性。",
"startTime": "00:00:57",
"endTime": "00:01:03"
},
{
"text": "9解释:",
"startTime": "00:01:03",
"endTime": "00:01:04"
},
{
"text": "通过数学推导,证明了在有限次数的迭代后,算法的误差不会超过某个值,从而保证了算法的收敛性。",
"startTime": "00:01:04",
"endTime": "00:01:13"
},
{
"text": "BlockNovikoff的定理指出,如果训练数据集在单位模长的超平面下线性可分,感知器算法在迭代之后会收敛。",
"startTime": "00:01:13",
"endTime": "00:01:22"
},
{
"text": "假设对所有x,成立xR。",
"startTime": "00:01:22",
"endTime": "00:01:25"
},
{
"text": "通过迭代更新权重向量w,感知器算法能够收敛。",
"startTime": "00:01:25",
"endTime": "00:01:30"
},
{
"text": "使用迭代公式wk1wkynxn,其中yn是第n个样本的标签。",
"startTime": "00:01:30",
"endTime": "00:01:37"
},
{
"text": "经过k次迭代后,算法的性能指标如误差会达到某个上限。",
"startTime": "00:01:37",
"endTime": "00:01:43"
},
{
"text": "利用迭代公式和数据特性,证明了算法的收敛性。",
"startTime": "00:01:43",
"endTime": "00:01:47"
},
{
"text": "通过数学推导,证明了在有限次数的迭代后,算法的误差不会超过某个值,从而保证了算法的收敛性。",
"startTime": "00:01:47",
"endTime": "00:01:56"
}
]
},
{
"pageId": 1818,
"mp3": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_101013_413724.mp3",
"bgImg": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100546_146941.png",
"humanUrl": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_101323_603019.mp4",
"nodeTime": null,
"captionsList": [
{
"text": "感知机学习算法,定理表明误分类次数有上界。",
"startTime": "00:00:00",
"endTime": "00:00:04"
},
{
"text": "当训练数据集线性可分时0是线性可分的严格条件,感知机学习算法原始形式迭代收敛。",
"startTime": "00:00:04",
"endTime": "00:00:12"
},
{
"text": "数据分布越紧凑R越小、",
"startTime": "00:00:12",
"endTime": "00:00:15"
},
{
"text": "分类间隔越清晰越大,权重调整更容易,算法收敛越快。",
"startTime": "00:00:15",
"endTime": "00:00:21"
},
{
"text": "这一关系强调了数据预处理如归一化和特征工程的重要性。",
"startTime": "00:00:21",
"endTime": "00:00:26"
},
{
"text": "感知机算法存在许多解,既依赖于初值,也依赖迭代过程中误分类点的选择顺序。",
"startTime": "00:00:26",
"endTime": "00:00:33"
},
{
"text": "为得到唯一分离超平面,需要增加约束,如SVM。",
"startTime": "00:00:33",
"endTime": "00:00:38"
},
{
"text": "线性不可分数据集,迭代震荡。",
"startTime": "00:00:38",
"endTime": "00:00:42"
}
]
},
{
"pageId": 1819,
"mp3": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_101031_431130.mp3",
"bgImg": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100547_147218.png",
"humanUrl": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_101325_605233.mp4",
"nodeTime": null,
"captionsList": [
{
"text": "请稍等。",
"startTime": "00:00:00",
"endTime": "00:00:01"
},
{
"text": "感知机学习算法的核心是通过调整权重和偏置来实现分类。",
"startTime": "00:00:01",
"endTime": "00:00:06"
},
{
"text": "首先,我们来看对偶形式的表达式:",
"startTime": "00:00:06",
"endTime": "00:00:10"
},
{
"text": "wiyixibiyi其中,w是权重向量,b是偏置项,yi是实例的标记,xi是实例特征向量,N是样本数量,i是拉格朗日乘子。",
"startTime": "00:00:10",
"endTime": "00:00:21"
},
{
"text": "基本想法是将w和b表示为实例和标记的线性组合,通过求解其系数来得到w和b,从而实现对误分类点的修正。",
"startTime": "00:00:21",
"endTime": "00:00:32"
},
{
"text": "最后学习到的w和b会用于构建感知机模型,以进行分类任务。",
"startTime": "00:00:32",
"endTime": "00:00:38"
}
]
},
{
"pageId": 1820,
"mp3": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_101045_445510.mp3",
"bgImg": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100547_147439.png",
"humanUrl": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_101326_606983.mp4",
"nodeTime": null,
"captionsList": [
{
"text": "感知机学习算法,对偶算法的核心是:",
"startTime": "00:00:00",
"endTime": "00:00:03"
},
{
"text": "1学习目标变成了最小化误差。",
"startTime": "00:00:03",
"endTime": "00:00:06"
},
{
"text": "2初始参数全为0。",
"startTime": "00:00:06",
"endTime": "00:00:08"
},
{
"text": "3判断是否是误分类点,如果是的话。",
"startTime": "00:00:08",
"endTime": "00:00:12"
},
{
"text": "对偶形式适用于样本个数比特征空间的维数小很多的情况。",
"startTime": "00:00:12",
"endTime": "00:00:17"
},
{
"text": "使用Gram矩阵简化计算。",
"startTime": "00:00:17",
"endTime": "00:00:20"
}
]
},
{
"pageId": 1821,
"mp3": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_101107_467073.mp3",
"bgImg": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_100547_147670.png",
"humanUrl": "https://ppt-1301283434.cos.ap-beijing.myqcloud.com/2025/09/28/20250928_101328_608909.mp4",
"nodeTime": null,
"captionsList": [
{
"text": "今天,我们探讨了如何通过数学模型来解决实际问题。",
"startTime": "00:00:00",
"endTime": "00:00:05"
},
{
"text": "我们回顾了三个关键点:正例、负例和决策边界。",
"startTime": "00:00:05",
"endTime": "00:00:10"
},
{
"text": "正例和负例是我们的数据样本,而决策边界则是根据这些样本建立的分类模型。",
"startTime": "00:00:10",
"endTime": "00:00:16"
},
{
"text": "请记住这三个关键点:1正例和负例是我们训练模型的基础。",
"startTime": "00:00:16",
"endTime": "00:00:22"
},
{
"text": "2决策边界决定了模型的预测结果。",
"startTime": "00:00:22",
"endTime": "00:00:25"
},
{
"text": "3模型的性能取决于我们如何选择正例和负例。",
"startTime": "00:00:25",
"endTime": "00:00:30"
},
{
"text": "展望未来,期待与您共同探索更多有趣的数学应用。",
"startTime": "00:00:30",
"endTime": "00:00:35"
},
{
"text": "如果您有任何疑问或想要进一步讨论,请随时联系我。",
"startTime": "00:00:35",
"endTime": "00:00:39"
},
{
"text": "感谢您的聆听,欢迎交流!",
"startTime": "00:00:39",
"endTime": "00:00:42"
}
]
}
]
}
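The response above can be fetched and traversed with a few lines of Python. This is a minimal sketch using only the standard library; the endpoint URL and field names are taken directly from the example response, while the function names (`fetch_ppt_data`, `summarize`) are my own for illustration.

```python
import json
import urllib.request

API_URL = "http://human-ppt-api.jeemoo.net/ppt/api/ppt-data/222"

def fetch_ppt_data(url: str = API_URL) -> dict:
    """Fetch and decode the PPT data JSON from the API."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def summarize(data: dict) -> list:
    """Return (pageId, caption count) for each slide in pptImgList."""
    return [(page["pageId"], len(page["captionsList"]))
            for page in data["pptImgList"]]
```

For the sample response above, `summarize` would yield pairs such as `(1804, 5)` and `(1805, 2)`, one per slide.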
Field Reference

| Field | Description |
|---|---|
| humanUrl | Digital-human video URL (defaults to /uploads/default_human.mp4) |
| pptImgList | Array of slides |
| mp3 | Slide audio file URL |
| bgImg | Slide background image URL |
| captionsList | Array of captions |
| text | Caption text |
| startTime | Caption start time (format: 00:00:00) |
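The timestamps are HH:MM:SS strings; to render captions against audio/video playback you typically convert them to seconds. A minimal sketch (the helper name `hms_to_seconds` is my own):

```python
def hms_to_seconds(ts: str) -> int:
    """Convert an "HH:MM:SS" timestamp (e.g. "00:01:22") to total seconds."""
    h, m, s = (int(part) for part in ts.split(":"))
    return h * 3600 + m * 60 + s
```

For example, the caption span "00:00:19" to "00:00:22" from page 1804 converts to the interval 19–22 seconds.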